Massive Leak Shows Erotic Chatbot Users Turned Women’s Yearbook Pictures Into AI Porn

404 Media
www.404media.co
2025-11-19 15:20:59
Chatbot roleplay and image generator platform SecretDesires.ai left cloud storage containers holding nearly two million images and videos exposed, including photos and full names of women taken from social media, at their workplaces, graduating from universities, taking selfies on vacation, and more....
Original Article

An erotic roleplay chatbot and AI image creation platform called Secret Desires left millions of user-uploaded photos exposed and available to the public. The exposed storage held nearly two million photos and videos, many of them photos of completely random people with very little digital footprint.

The exposed data shows how many people use AI roleplay apps that allow face-swapping features: to create nonconsensual sexual imagery of everyone, from the most famous entertainers in the world to women who are not public figures in any way. In addition to the real photo inputs, the exposed data includes AI-generated outputs, which are mostly sexual and often incredibly graphic. Unlike “nudify” apps that generate nude images of real people, this platform puts real people into AI-generated videos of hardcore sexual scenarios.

Secret Desires is a browser-based platform similar to Character.ai or Meta’s AI avatar creation tool, which generates personalized chatbots and images based on user prompting. Earlier this year, as part of its paid subscriptions that range from $7.99 to $19.99 a month, it had a “face swapping” feature that let users upload images of real people to put them in sexually explicit AI-generated images and videos. These uploads, viewed by 404 Media, are a large part of what’s been exposed publicly, and based on the dates of the files, they were potentially exposed for months.

About an hour after 404 Media contacted Secret Desires on Monday to alert the company to the exposed containers and ask for comment, the files became inaccessible. However, Secret Desires and Jack Simmons, CEO of its parent company Playhouse Media, did not respond to my questions, including why these containers weren’t secure and how long they were exposed.

Do you have a tip about AI and porn? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.

The platform was storing links to images and videos in unsecured Microsoft Azure Blob containers, where anyone could access XML files containing links to the images and go through the data inside. A container labeled “removed images” contained around 930,000 images, many of recognizable celebrities and very young looking women; a container named “faceswap” contained 50,000 images; and one named “live photos,” referring to short AI-generated videos, contained 220,000 videos. A number of the images are duplicates with different file names, or are of the same person from different angles or cropping of the photos, but in total there were nearly 1.8 million individual files in the containers viewed by 404 Media.
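For readers unfamiliar with how this kind of exposure works: when an Azure Blob Storage container is configured for anonymous, container-level public access, anyone can request its blob listing over plain HTTPS and receive an XML index of every file inside. The sketch below illustrates that generic, documented mechanism with a hypothetical storage account and container name; it is not specific to Secret Desires, whose containers have since been locked down.

```python
# Minimal sketch of how an anonymously readable Azure Blob container exposes
# its contents: the standard "List Blobs" request returns an XML index.
# The account and container names here are hypothetical placeholders.
import xml.etree.ElementTree as ET
import requests

ACCOUNT = "exampleaccount"      # hypothetical storage account
CONTAINER = "examplecontainer"  # hypothetical container with public access

url = f"https://{ACCOUNT}.blob.core.windows.net/{CONTAINER}"
resp = requests.get(url, params={"restype": "container", "comp": "list"}, timeout=10)
resp.raise_for_status()

# The response is an <EnumerationResults> document; each <Blob><Name> entry
# is a file that anyone on the internet can fetch directly.
root = ET.fromstring(resp.content)
for name in root.iter("Name"):
    print(name.text)
```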

The photos in the removed images and faceswap datasets are overwhelmingly real photos (meaning, not AI generated) of women, including adult performers, influencers, and celebrities, but also photos of women who are definitely not famous. The datasets also include many photos that look like they were taken from women’s social media profiles, like selfies taken in bedrooms or smiling profile photos.

In the faceswap container, I found a file photo of a state representative speaking in public, photos where women took mirror selfies seemingly years ago with flip phones and Blackberries, screenshots of selfies from Snapchat, a photo of a woman posing with her university degree, and a yearbook photo. Some of the file names include full first and last names of the women pictured. These and many more photos are in the exposed files alongside stolen images from adult content creators’ videos and websites and screenshots of actors from films. Their presence in this container means someone was uploading their photos to the Secret Desires face-swapping feature—likely to make explicit images of them, as that’s what the platform advertises itself as being built for, and because a large amount of the exposed content is sexual imagery.

Some of the faces in the faceswap containers are recognizable in the generations in the “live photos” container, which appears to hold outputs generated by Secret Desires and consists almost entirely of hardcore pornographic AI-generated videos. In this container, multiple videos feature extremely young-looking people having sex.


In early 2025, Secret Desires removed its face-swapping feature. The most recent date in the faceswap files is April 2025. This tracks with Reddit comments from the same time, where users complained that Secret Desires “dropped” the face swapping feature. “I canceled my membership to SecretDesires when they dropped the Faceswap. Do you know if there’s another site comparable? Secret Desires was amazing for image generation,” one user said in a thread about looking for alternatives to the platform. “I was part of the beta testing and the faceswop was great. I was able to upload pictures of my wife and it generated a pretty close,” another replied. “Shame they got rid of it.”

In the Secret Desires Discord channel, where people discuss how they’re using the app, users noticed that the platform still listed “face swapping” as a paid feature as of November 3. As of writing, on November 11, face swapping isn’t listed in the subscription features anymore. Secret Desires still advertises itself as a “spicy chatting” platform where you can make your own personalized AI companion, and it has a voice cloning mode, where users can upload an audio file of someone speaking to clone their voice in audio chat modes.

On its site, Secret Desires says it uses end-to-end encryption to secure communications from users: “All your communications—including messages, voice calls, and image exchanges—are encrypted both at rest and in transit using industry-leading encryption standards. This ensures that only you have access to your conversations.” It also says it stores data securely: “Your data is securely stored on protected servers with stringent access controls. We employ advanced security protocols to safeguard your information against unauthorized access.”

The prompts exposed by some of the file names are also telling of how some people use Secret Desires. Several prompts in the faceswap container, visible as file names, showed users’ “secret desire” was to generate images of underage girls: “17-year-old, high school junior, perfect intricate detail innocent face,” several prompts said, along with names of young female celebrities. We know from hacks of other “AI girlfriend” platforms that this is a popular demand of these tools; Secret Desires specifically says on its terms of use that it forbids generating underage images.

Screenshot of a former version of the subscription offerings on SecretDesires.ai, via Discord. Edits by the user

Secret Desires runs advertisements on YouTube where it markets the platform’s ability to create sexualized versions of real people you encounter in the world. “AI girls never say no,” an AI-generated woman says in one of Secret Desires’ YouTube Shorts. “I can look like your favorite celebrity. That girl from the gym. Your dream anime character or anyone else you fantasize about? I can do everything for you.” Most of Secret Desires’ ads on YouTube are about giving up on real-life connections and dating apps in favor of getting an AI girlfriend. “What if she could be everything you imagined? Shape her style, her personality, and create the perfect connection just for you,” one says. Other ads proclaim that in an ideal reality, your therapist, best friend, and romantic partner could all be AI. Most of Secret Desires’ marketing features young, lonely men as the users.

We know from years of research into face-swapping apps, AI companion apps, and erotic roleplay platforms that there is a real demand for these tools, and a risk that they’ll be used by stalkers and abusers for making images of exes, acquaintances, and random women they want to see nude or having sex. They’re accessible and advertised all over social media, and children find these platforms easily and use them to create child sexual abuse material of their classmates. When people make sexually explicit deepfakes of others without their consent, the aftermath for their targets is often devastating; it impacts their careers, their self-confidence, and in some cases, their physical safety. Because Secret Desires left this data in the open and mishandled its users’ data, we have a clear look at how people use generative AI to sexually fantasize about the women around them, whether those women know their photos are being used or not.

About the author

Sam Cole is writing from the far reaches of the internet, about sexuality, the adult industry, online culture, and AI. She's the author of How Sex Changed the Internet and the Internet Changed Sex.

Samantha Cole

How generative AI in Arc Raiders started a scrap over the gaming industry’s future

Guardian
www.theguardian.com
2025-11-19 15:00:01
The use of AI in the surprise game-of-the-year contender has sparked a heated cultural and ethical debate, and raised existential questions for artists, writers and voice actors • Don’t get Pushing Buttons delivered to your inbox? Sign up here Arc Raiders is, by all accounts, a late game-of-the-year...
Original Article

Arc Raiders is, by all accounts, a late game-of-the-year contender. Dropped into a multiplayer world overrun with hostile drones and military robots, every human player is at the mercy of the machines – and each other. Can you trust the other raider you’ve spotted on your way back to humanity’s safe haven underground, or will they shoot you and take everything you’ve just scavenged? Perhaps surprisingly, humanity is (mostly) choosing to band together, according to most people I’ve talked to about this game.

In a review for Gamespot , Mark Delaney paints a beguiling picture of Arc Raiders’s potential for generating war stories, and highlights its surprisingly hopeful tone as the thing that elevates it above similar multiplayer extraction shooters: “We can all kill each other in Arc Raiders. The fact that most of us are choosing instead to lend a helping hand, if not a sign that humanity will be all right in the real world, at the very least makes for one of the best multiplayer games I’ve ever played.”

But, but, but, but … There is a small irony to Arc’s depiction of humanity united against the machines. The game uses AI-generated text-to-speech voices, trained on real actors. (The game also uses machine learning to improve the behaviour and animation of its robot enemies, a different type of “AI”, which video games have been using for ever.) Games writer Rick Lane found this to be so ethically compromising that he couldn’t look past it. “For Arc Raiders to ride the wave of human sociability all the way to the bank, while also being so contemptuous of the thing that makes us social animals – carving up human voices and reassembling them like a digital Victor Frankenstein – demonstrates a lack of artistic integrity that I find impossible to ignore,” he wrote for Eurogamer .

Generative AI in video game development is becoming a red-line issue for many players (although it’s impossible to tell how many – neither social media outrage nor Steam forum sentiment are reliable predictors of how most people actually feel). It gives a lot of people, myself included, the ick. Last week, the new Call of Duty also came under fire (sorry) for using supposedly AI-generated art; people absolutely hate it. Proponents of the use of generative AI in games often say that it empowers smaller developers to do more with less, but Call of Duty is a multibillion-dollar franchise. Activision can more than afford to pay artists to draw something. Given Arc Raiders’s success, you could say the same about its AI voice lines.

It is an existential issue for video game workers – artists, writers and voice actors particularly, but also coders – who may be at risk of losing out to this technology. Many believe that gaming’s corporate overlords would be thrilled to replace expensive, inconvenient humans with machines that generate inadequate but functional work. Take EA, which is mandating that its employees use the company’s internal suite of AI tools, even though they are apparently widely hated. And then there’s Krafton, which proudly declared itself an AI-first game developer before offering its Korean employees voluntary redundancy.

Under fire … Call of Duty: Black Ops 7 has been called out for using AI-generated art. Photograph: Activision

Indeed, most of the people rushing to defend the use of generative AI in games are not everyday players or on-the-ground developers, but the corporate class. Epic’s Tim Sweeney – net worth $5bn, give or take – posted on X a series of replies to Eurogamer’s Arc Raiders review, beginning with the familiar, facepalm-inducing entreaty to keep “politics” out of video game reviews (“Political opinions should go into op-eds folks.”). Sweeney argued that generative AI could “transform gaming”, evoking a dystopian vision of the future: “Instead of games having a few dozen or hundred lines of prerecorded dialogue, how about infinite, context-sensitive, personality-reflecting dialogue based on and tuned by human voice actors?”

Personally, I do not want a machine constantly generating things it thinks I want to hear. I would rather have characters speak lines written by humans with something to say, performed by other humans who understand that meaning. As the award-winning video game actor Jane Perry put it in an interview with GamesIndustry.biz : “Will a bot scuttle up to the stage at the Games awards or the Baftas to accept an award for best performance? I think most audiences prefer a real human performance; however, the creative drive of the tech elite is incredibly strong, especially when the name of the game is to replace humans with machines.”

In my many years covering this beat, I have noticed that what happens in the video game world often happens in the wider world. A few years ago, there was a rush of investment in Web3/blockchain-driven games that bought into the idea of NFTs – digital “artworks” that people could own and trade, all of which were just unbelievably ugly, all rad skulls and gurning computer-generated apes smoking cigars; thankfully, that bubble burst spectacularly. When the big tech world suddenly latched on to the idea of the “metaverse” a few years ago, gaming companies had already been building much better versions of that idea for decades. And Gamergate provided a blueprint for the weaponisation of disaffected young men that directly influenced the Trump campaign playbook and set the template for the now omnipresent culture wars. This is why anyone interested in the impact of AI on work and culture should be looking at the ripples that that technology is making among developers and players. It can be an interesting predictor.

What we’re seeing play out looks like a familiar struggle between the people who actually make things, and those who profit off that labour. We’re also seeing players question whether they should pay the same money for games that include lower-quality, machine-generated art and voices. And we are seeing new lines being drawn around which uses of AI are culturally and ethically acceptable, and which are not.

What to play

A plot less travelled … Goodnight Universe. Photograph: Nice Dream/Skybound Games

From the people behind the devastating Before Your Eyes comes Goodnight Universe , a game in which you play a super-intelligent six-month-old baby with psychic powers . It’s narrated by the baby’s inner monologue: wee Isaac suspects that he’s a lot smarter than a baby should be, and finds it exceptionally frustrating that he seems unable to communicate his thoughts and feelings to his family. But soon he develops telekinetic abilities and the power to read minds, attracting unwanted attention. If you have a webcam, you can play it with your eyes, by looking around and blinking. This game packs an emotional punch and the plot also goes places I wasn’t expecting. It also made me nostalgic for the relative past, when my children were still babies.

Available on: PC, Nintendo Switch 2, PS5, Xbox
Estimated playtime: three to four hours

What to read

First look … Benjamin Evan Ainsworth as Link and Bo Bragason as Zelda in The Legend of Zelda film, coming in 2027. Photograph: Nintendo/Sony
  • Nintendo has released the first image from the forthcoming Legend of Zelda movie, starring Bo Bragason and Benjamin Evan Ainsworth, pictured here lounging in a meadow. In it, Link looks very Ocarina of Time; I am reassured that Princess Zelda is holding a bow, which hopefully indicates she’ll be a part of the action rather than a damsel in distress.

  • The nominations for December’s Game awards are out, led by Ghost of Yōtei, Clair Obscur: Expedition 33 and Death Stranding 2. (The Guardian has been a voting outlet for the awards previously, but is not this year.) As we reported last week, the annual event recently dropped its Future Class programme for up-and-coming developers, who have described feeling like props.

  • A band of modders have brought Sony’s infamously cancelled shooter Concord back to life – but the company has brought down the ban hammer, issuing take-down notices for gameplay footage shared on YouTube. Its servers are still up – for now.


What to click

Question Block

Fantasy universe … Cyrodiil in The Elder Scrolls: Oblivion. Photograph: Bethesda Game Studios

Reader Jude asks this week’s question:

“I started No Man’s Sky recently. It’s the first game I’ve ever played that feels like it could, at some point, turn into something to live in – like Ready Player One, or the now ubiquitous Japanese isekai scenario [where characters are sucked into an alternate world]. Does anybody else out there have a game they could live in?”

I had this feeling when I first played Oblivion, 20 years ago. Playing the remaster, I now find this notion laughable, but at the time I thought the game had everything I needed – towns and cities and delicious-looking food and books. It has interesting people and anthropomorphic lions and lizards, magic and weapons and vampires. If I could have, I would have lived in Cyrodiil, from The Elder Scrolls (above). It seems small now, compared to modern open-world games, but I think if I were to spend hours jacked into some kind of fantasy universe instead of my actual life, I wouldn’t want a world that’s overwhelmingly huge. I’d want one that’s comfortingly conquerable.

I can think of plenty of virtual places I wouldn’t want to live – World of Warcraft’s Azeroth is too dangerous, the Mushroom Kingdom is so colourful it would hurt your brain, and don’t get me started on Elden Ring’s Lands Between. Hyrule is too lonely; with No Man’s Sky, it’s mostly the other players that make it interesting.

I’ll throw this one out to the readership: is there a video game universe you’d want to inhabit?

If you’ve got a question for Question Block – or anything else to say about the newsletter – hit reply or email us on pushingbuttons@theguardian.com.

Buckingham Palace Christmas market: why tourists flocked there – and found just locked gates and big puddles

Guardian
www.theguardian.com
2025-11-19 14:56:17
The hot spot seemed the perfect place for Yuletide-loving royalists. But, as with the Eiffel Tower in Beijing and some of the most picturesque windmills in the Netherlands, there was much less to it than first met the eye ... Name: Buckingham Palace Christmas market. Age: Brand new this year. Contin...
Original Article

Name: Buckingham Palace Christmas market.

Age: Brand new this year.

Appearance: Fantastical.

Really? A Christmas market? At Buckingham Palace? Yes: broad paths lined with wooden huts, festooned with lights and Christmas trees, “a beautiful winter wonderland atmosphere” – all within the forecourt of the royal palace.

It sounds almost too good to be true. Does it? But look at the picture!

I am. Where are all those lights hanging from? They’re just floating. It’s part of the magic.

And there’s snow on the ground. When was this photograph taken? Don’t worry about that – come see for yourself. There are plentiful trains to London, and they’re all free!

Wait – is this a hoax? It bears some of the hallmarks of a hoax, yes.

Such as? Fake AI-generated pics of a Christmas market at Buckingham Palace plastered all over TikTok, Facebook and Instagram.

To what end? It’s a mystery. More than one account has posted a variety of these AI fakes, with no apparent intent.

Other than disappointing Yuletide-loving royalists? It has certainly done that. Lots of visitors have reported turning up to find nothing but locked gates, security fencing and puddles.

So there’s no truth to it? There is a festive pop-up in the Royal Mews gift shop round the corner from the palace gates, with royal-themed Christmas gifts and a single hut selling hot drinks out back.

That’s not the same thing. So the Royal Collection Trust has been obliged to clarify. “There will not be a Christmas market at Buckingham Palace,” it says.

Is this kind of AI hoaxing common these days? I’m afraid it is. In July, it was reported that elderly tourists were being lured to Perak in Malaysia by video of a cable car ride that doesn’t exist.

That’s unbelievable. And tour company Amsterdam Experiences is reporting an increase in customers requesting trips to unreal Dutch locations.

The windmills of their minds? Windmills situated alongside picturesque canals and tulip fields they have only seen in AI-generated images.

When will people learn? Not soon, it seems. Tourists who use AI services to plan their travels can find themselves stranded on remote mountaintops in Japan, or looking for an Eiffel Tower in Beijing.

I’m not normally one to make snap judgments, but if you use AI to plan your holiday, you’re pretty misguided. Maybe, but apparently 30% of international travellers do this now.

Do say: “Never travel anywhere without first verifying that that place actually exists.”

Don’t say: “I’m looking for the front gates to Jurassic Park. Is it behind the carpet warehouse, or what?”

Morning Spew: Eric Adams Is Done With New York City

hellgate
hellgatenyc.com
2025-11-19 14:54:06
The mayor is everywhere but here, and more links for your day....
Original Article

Have you been listening to the Hell Gate Podcast? You can catch last week's episode here .

And join us for one final meal at Hell Gate's Eric Adams Table of Success. As the Eric Adams era reaches its conclusion, we're gathering everyone who's made it to the bitter end, for the Table of Success: The Last Supper . You can make your haters your waiters, but you can't escape…The Last Supper.

We get it: It's not easy to be a lame duck mayor, still clocking in for a few more months in a city where people have made it exceedingly clear that they don't like you or the work you're doing. That probably feels bad. It's like your partner dumped you, but you have to keep living together for a while because it's hard to find a new apartment.

On the other hand, if you love New York City and its people; if there are still pieces of your agenda to make their lives better that you want to advance; if you want to be on hand to steer the ship and offer reassurance in case of some unforeseen crisis; if you take your sworn duty as a public servant seriously—well, then you suck it up and stick it out.

Not Eric Adams. Mayor Adams—jilted so hard by New Yorkers that not only did his own reelection campaign fizzle before the finish line but the candidate he endorsed in his stead took a drubbing too—is clearly done with New York.


How to Stay Sane in a World That Rewards Insanity

Hacker News
www.joanwestenberg.com
2025-11-19 14:40:38
Comments...
Original Article

Somewhere around 2016, the smartest people I knew started saying increasingly stupid things.

These were folks who could parse dense academic papers, who understood reason, who were entirely capable of holding two competing ideas in their heads without their brains short-circuiting.

But something changed.

One friend became “convinced” that every major news story was manufactured consent. Another started treating political disagreement as evidence of moral corruption. A third began using the word "liberal" as if it were a personality disorder rather than a loose coalition of sometimes contradictory beliefs.

The common thread: their extreme positions got them more of what they wanted. The friend who saw conspiracies everywhere built a following. Then an audience. Then a 7-figure income stream. The one who tribalized every issue found a ready-made community that validated every prior. Etc, etc.

The incentive gradient was clear: sanity was expensive, and extremism paid dividends.

We talk a lot about polarization as if it were a disease that infected society, but we’re missing a key data point: polarization is a growth hack, and it works.

It delivers results.

When you pick a side and commit to it wholly and without reservation, you get things that moderate positions cannot provide. You get certainty in an uncertain world. You get a community that will defend you. You get a simple heuristic for navigating complex issues.

Above all: you get engagement, attention and influence.

The writer who says "this issue has nuance and I can see valid concerns on multiple sides" gets a pat on the head and zero retweets. The influencer who says "everyone who disagrees with me on this is either evil or stupid" gets quote-tweeted into visibility and gains followers who appreciate their approximation of clarity.

The returns on reasonableness have almost entirely collapsed.

Which begs the question: why resist? If extremism delivers what people want, maybe we should just let it run its course and stop clutching our pearls?

The problem is what happens when everyone optimizes for the same short-term wins.

You end up in a world where changing your mind becomes impossible because you've built your entire identity around being right. Where admitting uncertainty is social suicide. Where every conversation is a performance for your tribe rather than an actual exchange of ideas. You lose the ability to solve problems that don't fit neatly into your ideological framework, which turns out to be most of the important problems.

Someone who goes all-in on ideological purity might start with a few strong opinions. Then those opinions attract an audience. That audience expects consistency. Any deviation gets punished. So they double down. They have to keep escalating to maintain their position, finding new heresies to denounce, new lines to draw. They've locked themselves into a trajectory they can't escape without losing everything they've built.

They're prisoners of their own brand.

Scale this up and you get a society where nobody can back down, where every disagreement = existential, where we've lost the ability to make tradeoffs // acknowledge complexity.

The incentives push us toward positions that feel good but make us collectively stupider.

And you can't opt out by just accepting your side lost.

You're stuck in stupid-world too.

So how do you actually stay sane?

  1. Start by diversifying your information diet in ways that feel actively uncomfortable. The goal isn't to agree with everything you read. You'll still think most of it is wrong. But exposing yourself to articulate versions of positions you oppose does something valuable: it makes you realize that intelligent people can disagree with you without being monsters or morons. This sounds obvious when written out, but your social media feed has spent years training you to believe otherwise.

  2. Second, practice distinguishing between stakes and truth. Just because an issue matters doesn't mean every claim about it is correct, and just because you've picked a side doesn't mean you have to defend every argument your side makes. The tribal logic says you have to accept the whole package, but that logic is selling you certainty you haven't earned.

  3. Third, find (or at least, look for) communities that reward humility, not tribal loyalty. These are rare, but they exist. They're the group chats where someone can say "I changed my mind about this" without being treated like a traitor. They're the forums where "I don't know" is an acceptable answer. They're the relationships where you can test ideas without performing for an audience. You cannot be reasonable in isolation. You need a small group of people who value truth-seeking over status games, and you need to invest in those relationships deliberately.

That all sounds hard.

Is it worth it?

That’s an individual choice.

You'll lose: reach, influence, certainty, the comfort of being part of something larger than yourself.

You'll gain: the ability to think clearly, the capacity to update your beliefs when evidence changes, relationships based on something other than shared enemies, and the possibility of being right in ways that matter.

These trades won't feel equivalent. The losses are immediate and visceral. The gains are distant and abstract. When you refuse to join the mob, you feel it right away. When you maintain your ability to think independently, the benefits accrue slowly over years.

The discount rate on sanity is brutal.

But consider the alternative.

The people I knew who went all-in on extremism got what they wanted in the short term. Some built audiences. Some found communities. Some gained certainty. Most of ‘em made bank. But they're trapped by their earlier positions. They can't update without admitting they were wrong, and admitting they were wrong would cost them their community. They've optimized themselves into a local maximum they can't escape. They won the game by its current rules and lost something harder to quantify.

The world will keep offering you bad trades, will keep rewarding positions you know are too simple to be true. Every day you'll watch people cash in their nuance for influence. Every day you'll be tempted to do the same. The only defense is to remember that some things compound differently than others.

Extremism gives you a fast start and a ceiling.

Sanity gives you a slow start and no limit to how far you can grow.

Remember: the world only rewards insanity because we're measuring the wrong timeframe.

Check back in ten years.

Field Notes on Now.

Proxmox Virtual Environment 9.1 available

Hacker News
www.proxmox.com
2025-11-19 14:35:17
Comments...
Original Article

VIENNA, Austria – November 19, 2025 – Leading open-source server solutions provider Proxmox Server Solutions GmbH (henceforth "Proxmox"), today announced the immediate availability of Proxmox Virtual Environment 9.1. The new version introduces significant enhancements across container deployment, virtual machine security, and software-defined networking, offering businesses greater flexibility, performance, and operational control.

Highlights in Proxmox Virtual Environment 9.1

Create LXC containers from OCI images

Proxmox VE 9.1 integrates support for Open Container Initiative (OCI) images, a standard format for container distribution. Users can now download widely-adopted OCI images directly from registries or upload them manually to use as templates for LXC containers. Depending on the image, these containers are provisioned as full system containers or lean application containers. Application containers are a distinct and optimized approach that ensures minimal footprint and better resource utilization for microservices. This new functionality means administrators can now deploy standardized applications (e.g., a specific database or API service) from existing container build pipelines quickly and seamlessly through the Proxmox VE GUI or command line.
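The press release does not include commands, but as a rough illustration, containers can be created programmatically through the existing `POST /api2/json/nodes/{node}/lxc` endpoint of the Proxmox VE REST API. The sketch below assumes an API token with sufficient privileges and a template that has already been downloaded to local storage; the host, token, the `nginx-latest` file name, and the exact volume ID syntax for an OCI-sourced image are assumptions for illustration, not documented values.

```python
# Hedged sketch: create an LXC container via the Proxmox VE REST API.
# Assumes a pre-downloaded template on the "local" storage; the file name
# shown for an OCI-derived image is hypothetical.
import requests

PVE_HOST = "https://pve.example.com:8006"          # hypothetical host
TOKEN = "PVEAPIToken=root@pam!automation=SECRET"   # hypothetical API token
NODE = "pve1"

resp = requests.post(
    f"{PVE_HOST}/api2/json/nodes/{NODE}/lxc",
    headers={"Authorization": TOKEN},
    data={
        "vmid": 120,
        "ostemplate": "local:vztmpl/nginx-latest.tar.xz",  # assumed OCI-derived template
        "hostname": "nginx-app",
        "memory": 512,
        "storage": "local-lvm",
        "net0": "name=eth0,bridge=vmbr0,ip=dhcp",
        "unprivileged": 1,
    },
    verify=False,  # typical for self-signed Proxmox certs; use a CA bundle in production
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```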

Support for TPM state in qcow2 format

This version introduces the ability to store the state of a virtual Trusted Platform Module (vTPM) in the qcow2 disk image format. This allows users to perform full VM snapshots, even with an active vTPM, across diverse storage types like NFS/CIFS. LVM storages with snapshots as volume chains now support taking offline snapshots of VMs with vTPM states. This advancement improves operational agility for security-sensitive workloads, such as Windows deployments that require a vTPM.

Fine-grained control of nested virtualization

Proxmox VE now offers enhanced control for nested virtualization in specialized VMs. This feature is especially useful for workloads such as nested hypervisors or Windows environments with Virtualization-based Security (VBS). A new vCPU flag allows administrators to conveniently and precisely enable virtualization extensions for nested virtualization. This flexible option gives IT administrators more control and offers an optimized alternative to simply exposing the full host CPU type to the guest.

Enhanced SDN status reporting

Version 9.1 comes with an improved Software-Defined Networking (SDN) stack, including detailed monitoring and reporting in the web interface. The GUI now offers more visibility into the SDN stack, displaying all guests connected to local bridges or VNets. EVPN zones additionally report the learned IPs and MAC addresses. Fabrics are integrated into the resource tree, showing routes, neighbors, and interfaces. The updated GUI offers visibility into key network components like IP-VRFs and MAC-VRFs. This enhanced observability simplifies cluster-wide network troubleshooting and monitoring of complex network topologies, without the need for the command line.

Availability

Proxmox Virtual Environment 9.1 is immediately available for download. Users can obtain a complete installation image via ISO download, which contains the full feature-set of the solution and can be installed quickly on bare-metal systems using an intuitive installation wizard.

Seamless distribution upgrades from older versions of Proxmox Virtual Environment are possible using the standard APT package management system. Furthermore, it is also possible to install Proxmox Virtual Environment on top of an existing Debian installation. As Free/Libre and Open Source Software (FLOSS), the entire solution is published under the GNU AGPLv3.

For enterprise users, Proxmox Server Solutions GmbH offers professional support through subscription plans. Pricing for these subscriptions starts at EUR 115 per CPU per year. A subscription provides access to the stable Enterprise Repository with timely updates via the web interface, as well as to certified technical support, and is recommended for production use.


Facts
The open-source project Proxmox VE has a huge worldwide user base with more than 1.6 million hosts. The virtualization platform has been translated into over 31 languages. More than 225,000 active community members in the support forum engage with and help each other. By using Proxmox VE as an alternative to proprietary virtualization management solutions, enterprises are able to centralize and modernize their IT infrastructure, and turn it into a cost-effective and flexible software-defined data center, based on the latest open-source technologies. Tens of thousands of customers rely on enterprise support subscriptions from Proxmox Server Solutions GmbH.

About Proxmox Server Solutions
Proxmox provides powerful and user-friendly open-source server software. Enterprises of all sizes and industries use the Proxmox solutions to deploy efficient and simplified IT infrastructures, minimize total cost of ownership, and avoid vendor lock-in. Proxmox also offers commercial support, training services, and an extensive partner ecosystem to ensure business continuity for its customers. Proxmox Server Solutions GmbH was established in 2005 and is headquartered in Vienna, Austria.

Contact: Daniela Häsler, Proxmox Server Solutions GmbH, marketing@proxmox.com

New WrtHug campaign hijacks thousands of end-of-life ASUS routers

Bleeping Computer
www.bleepingcomputer.com
2025-11-19 14:35:15
Thousands of ASUS WRT routers, mostly end-of-life or outdated devices, have been hijacked in a global campaign called Operation WrtHug that exploits six vulnerabilities. [...]...
Original Article

Thousands of ASUS WRT routers, mostly end-of-life or outdated devices, have been hijacked in a global campaign called Operation WrtHug that exploits six vulnerabilities.

Over the past six months, scanners looking for ASUS devices compromised in Operation WrtHug identified "roughly 50,000 unique IPs" around the globe.

Most of the compromised devices have IP addresses located in Taiwan, while others are distributed across Southeast Asia, Russia, Central Europe, and the United States.

Notably, there are no observed infections within China, which may indicate a threat actor from this country, but researchers found insufficient evidence for high-confidence attribution.

According to SecurityScorecard’s STRIKE researchers, based on targeting and attack methods, there may be a connection between Operation WrtHug and the AyySSHush campaign, first documented by GreyNoise in May.

WrtHug global spread
Source: SecurityScorecard

WrtHug attacks

The attacks begin with the exploitation of command injection flaws and other known vulnerabilities in ASUS WRT routers, mostly AC-series and AX-series devices.

According to STRIKE researchers, the WrtHug campaign may leverage the following security issues in attacks:

  • CVE-2023-41345/46/47/48 – OS command injection via token modules
  • CVE-2023-39780 – major command injection flaw (also used in the AyySSHush campaign)
  • CVE-2024-12912 – arbitrary command execution
  • CVE-2025-2492 – improper authentication control that can lead to unauthorized execution of functions

Of the vulnerabilities above, CVE-2025-2492 stands out as the only one with a critical severity score. A security advisory from ASUS in April warned about the severity of the flaw and that it could be triggered by a crafted request on routers that have the AiCloud feature enabled.

In a report today, SecurityScorecard says that "attackers seemingly leveraged the ASUS AiCloud service in this case to deploy a targeted global intrusion set."

An indicator of compromise for this campaign is the presence of a self-signed TLS certificate in AiCloud services that replaced the standard one generated by ASUS in 99% of the breached devices. The new certificate captured attention because it has a lifetime of 100 years, compared to the original, which is valid for only 10 years.

STRIKE researchers used this unique certificate to identify 50,000 infected IPs.

The malicious certificate
Source: SecurityScorecard
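Since the reported indicator of compromise is a self-signed AiCloud certificate with a roughly 100-year lifetime, one practical self-check is to pull the certificate from a router's HTTPS service and inspect its validity window. The sketch below is a generic check using Python's ssl module and the third-party cryptography package; the host and port are placeholders for your own router's AiCloud or web interface.

```python
# Rough self-check: fetch a router's TLS certificate and flag implausibly
# long validity periods (the WrtHug IoC is a ~100-year self-signed cert).
# Requires the "cryptography" package; host/port below are placeholders.
import ssl
from cryptography import x509

HOST = "192.168.1.1"  # your router's address (placeholder)
PORT = 8443           # HTTPS/AiCloud port in use on your device (placeholder)

pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

lifetime_days = (cert.not_valid_after - cert.not_valid_before).days
print(f"Issuer:  {cert.issuer.rfc4514_string()}")
print(f"Subject: {cert.subject.rfc4514_string()}")
print(f"Valid for ~{lifetime_days // 365} years ({lifetime_days} days)")

if lifetime_days > 15 * 365:
    print("WARNING: unusually long-lived certificate -- matches the WrtHug indicator.")
```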

Like in the AyySSHush campaign, the attackers do not upgrade the firmware of the compromised device, leaving it open to takeover by other threat actors.

Based on indicators of compromise, the researchers identified the following ASUS devices being targeted by Operation WrtHug:

• ASUS Wireless Router 4G-AC55U
• ASUS Wireless Router 4G-AC860U
• ASUS Wireless Router DSL-AC68U
• ASUS Wireless Router GT-AC5300
• ASUS Wireless Router GT-AX11000
• ASUS Wireless Router RT-AC1200HP
• ASUS Wireless Router RT-AC1300GPLUS
• ASUS Wireless Router RT-AC1300UHP

STRIKE believes that the compromised routers may be used as operational relay box (ORB) networks in Chinese hacking operations as stealth relay nodes, proxying, and hiding command-and-control infrastructure. However, the report does not delve into post-compromise operations and lacks specific details.

ASUS has issued security updates that address all of the vulnerabilities leveraged in the WrtHug attacks, so router owners should upgrade their firmware to the latest available version.

If the device is no longer under support, users are recommended to replace it or at least disable remote access features.

ASUS recently also fixed CVE-2025-59367, an authentication bypass flaw impacting several AC-series models, which, while not exploited yet, could be added to the attackers’ arsenal soon.


The hidden risks in your DevOps stack data—and how to address them

Bleeping Computer
www.bleepingcomputer.com
2025-11-19 14:20:29
DevOps repos on GitHub, GitLab, Bitbucket, and Azure DevOps face risks from weak access controls, misconfigurations, outages, and accidental deletions. GitProtect provides automated, immutable backups and fast recovery to secure your DevOps data. [...]...
Original Article


While DevOps drives innovation and simplifies collaboration, it also comes with its own set of risks and vulnerabilities. Developers rely on Git-based platforms like GitHub, Azure DevOps, Bitbucket, or GitLab to work on code.

Repositories usually contain mission-critical data, and with growth, teams expand and their workflows get more complex — all leading to more potential risks that could affect your data.

The Shared Responsibility model

The division of duties with regard to SaaS data protection is outlined in platform-specific shared responsibility models. You, as a customer, are responsible for the data stored in your SaaS accounts. Platforms like GitHub are not obligated to help you with data recovery.

The service provider is responsible for the uptime of their service, while the users’ duty is the security of data, accounts, and devices.

That means users must implement strict access controls, protect credentials, and leverage automated backups; all to secure data against ransomware attacks, human errors like accidental deletions, and service disruptions. Moreover, SaaS platforms themselves advise their users to implement their own backups.

Security differences between platforms

The leading distributed VCS platforms, like GitLab, offer built-in security features. These can help with building a cyber defence strategy. The specific controls and tools differ from platform to platform and range from personal access tokens (PATs) to access controls and regular reviews.

GitHub

In GitHub, users get native controls that include secret scanning, push protection, code security features like dependency review, and Dependabot alerts.

Push protection is on by default for new public repos and blocks known secrets at push. Secret scanning is also enabled for all public repos and can be extended to private ones.

It is advised to enforce MFA and branch protection across all projects.
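As a concrete illustration of that last recommendation, branch protection can be applied through GitHub's documented REST endpoint `PUT /repos/{owner}/{repo}/branches/{branch}/protection`. The sketch below is a minimal example under the assumption of a token with admin rights on a hypothetical repository; tune the required checks and review counts to your own workflow.

```python
# Minimal sketch: enable branch protection on "main" via the GitHub REST API.
# Owner/repo/token values are placeholders; adjust rules to your needs.
import requests

OWNER, REPO, BRANCH = "example-org", "example-repo", "main"  # placeholders
TOKEN = "ghp_..."  # a token with repo administration rights (placeholder)

resp = requests.put(
    f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "required_status_checks": {"strict": True, "contexts": []},
        "enforce_admins": True,
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        "restrictions": None,
    },
    timeout=15,
)
resp.raise_for_status()
print("Branch protection applied:", resp.status_code)
```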

Bitbucket

Bitbucket has hierarchical access, with team/group controls. Also, project-level permissions apply to all repos in that project unless they are tightened.

Security largely depends on admins regularly reviewing group scopes and repo privacy. Bitbucket's Secret Scanning feature helps monitor commits and pushes for exposed credentials.

Make sure to configure pipeline variables and avoid exposing sensitive data. It’s worth noting that Bitbucket integrates with the suite of Atlassian tools and services, such as Jira.

GitLab

GitLab comes as a comprehensive DevSecOps platform, covering source code management, CI/CD, and security testing.

Risks mainly come up in self-managed deployments where admins are responsible for hardening, patching, and backups.

GitLab’s guidance in their documentation assigns patching and host security to self-managed customers. Be sure to implement strict role segregation and keep runners isolated.

Azure DevOps

Microsoft’s Azure DevOps integrates with identity management via Microsoft Entra ID (SSO, MFA, Conditional Access).

A strong security posture for Azure DevOps data requires correctly configuring service connections and layered project/organization permissions.

Microsoft emphasizes customer responsibility for Azure DevOps configuration according to the Shared Responsibility Model.

Common DevOps security gaps & challenges

The data and configurations stored in platforms like Bitbucket are essential to modern software development. That makes your source code a prime target for cyber attacks and insider threats: bad actors who gain access to the data your business continuity and security depend on can demand a ransom.

It’s important to shift security to the left and address the industry-known vulnerabilities.

Common vulnerabilities include:

  • Weak access control

  • Improper repository permissions and configurations

  • No multi-factor authentication (MFA) or single sign-on (SSO)

  • Outdated systems & workflows

  • No automated backup (or treating GitHub, GitLab, Azure DevOps, or Bitbucket as backup)

  • Lack of tested disaster recovery strategies

  • Non-compliance with industry regulations

For example, there was a supply-chain attack targeting a popular GitHub Action called ‘tj-actions/changed-files’. The attackers published a malicious update under the same package name that was used across thousands of repositories, potentially exposing repository data and CI/CD secrets.

Attack vectors

There are different ways attackers can exploit vulnerabilities to access your data. They range from phishing and credential theft to ransomware attacks. Ransomware encrypts or erases your data — but how it is done depends on the platform:

GitHub
• How it is abused: Stolen PATs/OAuth tokens, malicious GitHub Actions, compromised CI runners
• Why it enables ransomware: Tokens and malicious Actions can write/delete repos, push malicious commits, poison dependencies, or encrypt artifacts
• Preventive measures: Fine-grained PATs, SSO & MFA, allowlist Actions, ephemeral runners, secret scanning, off-platform immutable backups

GitLab
• How it is abused: Compromised self-managed runners or admin accounts; insecure runners execute arbitrary jobs
• Why it enables ransomware: Compromised runners/admins allow attackers to delete or alter repos, alter CI, or remove local backups stored on the same nodes
• Preventive measures: Ephemeral/isolated runners, restrict who can register runners, strict role separation, timely patching, external immutable backups (including config & metadata)

Bitbucket
• How it is abused: Excessive project permissions, leaked pipeline variables, abused integrations/service hooks
• Why it enables ransomware: Cloud credentials or pipeline secrets let attackers access artifact stores, mirrors, or cloud backups to encrypt/delete
• Preventive measures: Tighten project/repo permissions, rotate keys, use variables properly, restrict third-party apps, external immutable backups

Azure DevOps
• How it is abused: Compromised Entra (Azure AD) accounts, over-privileged service connections, misconfigured pipelines
• Why it enables ransomware: Service connections & Azure resource access enable encryption of artifacts, deletion of backups, and destructive pipeline jobs at scale
• Preventive measures: Enforce conditional access & MFA, least-privilege service connections, restrict pipeline identities, segregate backup storage outside tenancy

Accidental deletion

Another risk is the potential for accidental deletions and malicious insiders doing damage from within the organization. This can be as simple as a mistyped command or excessive privileges leading to project deletion, but it can be devastating in the long run without backup or flexible recovery options.

Malicious insiders can intentionally disrupt operations or disable logging. Both cases can result in lost repo history, costly recovery, erased & lost data, as well as paused business operations.

Service outages

Software development teams face service outages of critical platforms they rely on. Downtime means no access to important repositories and CI/CD pipelines, which could completely stop business operations. The consequences range from missed deadlines and a lack of customer trust to wasted resources.

How to improve the security of your DevOps data

To address all of the abovementioned risks and secure data on git-hosting platforms, organizations must shift security left, and adhere to compliance requirements of industry regulations. It is important to remember that secrets should never be stored in repositories.

Access management

Strict access control means implementing role-based access control (RBAC) and following the principle of least privilege.

This way, permissions are adjusted specifically to each role and assigned accordingly, with no excessive access given to any user. All permissions should be verified regularly and inactive accounts revoked.
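A lightweight way to act on this for teams hosted on GitHub, for example, is to periodically list collaborators and flag anyone holding admin rights who no longer needs them. The sketch below uses the documented `GET /repos/{owner}/{repo}/collaborators` endpoint; the owner, repo, and token are placeholders, and the same idea applies to the other platforms through their own APIs.

```python
# Sketch of a periodic permission review: list collaborators on a repo and
# flag admin-level access. Owner/repo/token are placeholders.
import requests

OWNER, REPO = "example-org", "example-repo"  # placeholders
TOKEN = "ghp_..."                            # read access to the repo (placeholder)

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/collaborators",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    params={"per_page": 100},
    timeout=15,
)
resp.raise_for_status()

for user in resp.json():
    perms = user.get("permissions", {})
    if perms.get("admin"):
        print(f"ADMIN access: {user['login']} -- confirm this is still required")
    elif perms.get("push"):
        print(f"write access: {user['login']}")
```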

Backup and disaster recovery

A third-party backup and disaster recovery solution such as GitProtect is like a safety net. When choosing a solution, seek full coverage for your DevOps stack (project data, repositories, and all the metadata). Ideally, backups should be automated, encrypted, geo-redundant, and stored in WORM-compliant, immutable format.

This should be complemented by a flexible recovery arsenal: granular restore, cross-over recovery, point-in-time restore, and full data recovery.

When backup and disaster recovery solutions check those boxes, you guarantee ransomware protection, compliance with industry standards, and adherence to the 3-2-1 backup rule. Other critical aspects include monitoring and audit preparedness, an intuitive user interface, along with alerts, notifications, and clear logs.
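Independent of any particular backup product, the 3-2-1 idea can be illustrated with a small script that takes a full mirror of a repository (all branches and tags) and writes a timestamped archive to an off-platform location. This is only a sketch under the assumption that plain `git` is available locally; it does not capture platform metadata such as issues or pipeline configuration, which is where dedicated tooling comes in.

```python
# Minimal off-platform backup sketch: mirror-clone a repo and archive it.
# Repository URL and destination directory are placeholders; metadata
# (issues, PRs, CI config) is NOT captured by a git mirror alone.
import subprocess
import tarfile
import tempfile
from datetime import datetime, timezone
from pathlib import Path

REPO_URL = "https://github.com/example-org/example-repo.git"  # placeholder
DEST_DIR = Path("/backups/git")                               # off-platform storage (placeholder)

DEST_DIR.mkdir(parents=True, exist_ok=True)
stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")

with tempfile.TemporaryDirectory() as tmp:
    mirror = Path(tmp) / "repo.git"
    # --mirror copies all refs (branches, tags, notes), not just the default branch
    subprocess.run(["git", "clone", "--mirror", REPO_URL, str(mirror)], check=True)

    archive = DEST_DIR / f"example-repo-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(str(mirror), arcname=f"example-repo-{stamp}.git")

print("Backup written to", archive)
```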

Ensure compliant DevOps backup and recovery with a 14-day trial of GitProtect . No credit card required!

Sponsored and written by GitProtect .

Security updates for Wednesday

Linux Weekly News
lwn.net
2025-11-19 14:08:18
Security updates have been issued by Debian (pdfminer), Fedora (chromium and firefox), Mageia (bubblewrap, flatpak, cups-filters, and thunderbird), Oracle (container-tools:rhel8, kernel, and squid), Red Hat (kernel), Slackware (libarchive), SUSE (gimp, itextpdf, kernel, thunderbird, and unbound), an...
Original Article


The Cloudflare Outage May Be a Security Roadmap

Krebs
krebsonsecurity.com
2025-11-19 14:07:03
An intermittent outage at Cloudflare on Tuesday briefly knocked many of the Internet's top destinations offline. Some affected Cloudflare customers were able to pivot away from the platform temporarily so that visitors could still access their websites. But security experts say doing so may have als...
Original Article

An intermittent outage at Cloudflare on Tuesday briefly knocked many of the Internet’s top destinations offline. Some affected Cloudflare customers were able to pivot away from the platform temporarily so that visitors could still access their websites. But security experts say doing so may have also triggered an impromptu network penetration test for organizations that have come to rely on Cloudflare to block many types of abusive and malicious traffic.

At around 6:30 EST/11:30 UTC on Nov. 18, Cloudflare’s status page acknowledged the company was experiencing “an internal service degradation.” After several hours of Cloudflare services coming back up and failing again, many websites behind Cloudflare found they could not migrate away from using the company’s services because the Cloudflare portal was unreachable and/or because they also were getting their domain name system (DNS) services from Cloudflare.

However, some customers did manage to pivot their domains away from Cloudflare during the outage. And many of those organizations probably need to take a closer look at their web application firewall (WAF) logs during that time, said Aaron Turner, a faculty member at IANS Research.

Turner said Cloudflare’s WAF does a good job filtering out malicious traffic that matches any one of the top ten types of application-layer attacks, including credential stuffing, cross-site scripting, SQL injection, bot attacks and API abuse. But he said this outage might be a good opportunity for Cloudflare customers to better understand how their own app and website defenses may be failing without Cloudflare’s help.

“Your developers could have been lazy in the past for SQL injection because Cloudflare stopped that stuff at the edge,” Turner said. “Maybe you didn’t have the best security QA [quality assurance] for certain things because Cloudflare was the control layer to compensate for that.”

Turner said one company he’s working with saw a huge increase in log volume and they are still trying to figure out what was “legit malicious” versus just noise.

“It looks like there was about an eight hour window when several high-profile sites decided to bypass Cloudflare for the sake of availability,” Turner said. “Many companies have essentially relied on Cloudflare for the OWASP Top Ten [web application vulnerabilities] and a whole range of bot blocking. How much badness could have happened in that window? Any organization that made that decision needs to look closely at any exposed infrastructure to see if they have someone persisting after they’ve switched back to Cloudflare protections.”

Turner said some cybercrime groups likely noticed when an online merchant they normally stalk stopped using Cloudflare’s services during the outage.

“Let’s say you were an attacker, trying to grind your way into a target, but you felt that Cloudflare was in the way in the past,” he said. “Then you see through DNS changes that the target has eliminated Cloudflare from their web stack due to the outage. You’re now going to launch a whole bunch of new attacks because the protective layer is no longer in place.”
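For teams trying to separate "legit malicious" from noise in that window, a first pass is often just pattern-matching the raw access logs for the period when Cloudflare's WAF was out of the path. The sketch below is a deliberately crude example, assuming a combined-format access log at a placeholder path and a handful of common SQL injection, XSS, and traversal signatures; it is a triage aid, not a substitute for proper WAF or IDS analysis.

```python
# Crude triage sketch: grep web server access logs for common attack patterns
# seen during a window when edge protections were bypassed. The log path and
# the signature list are illustrative, not exhaustive.
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # placeholder path
SIGNATURES = {
    "sqli": re.compile(r"(union\s+select|or\s+1=1|sleep\(\d+\))", re.I),
    "xss": re.compile(r"(<script|onerror\s*=|javascript:)", re.I),
    "traversal": re.compile(r"(\.\./\.\./|/etc/passwd)", re.I),
}

hits = Counter()
with open(LOG_PATH, errors="replace") as log:
    for line in log:
        for label, pattern in SIGNATURES.items():
            if pattern.search(line):
                hits[label] += 1
                print(f"[{label}] {line.strip()}")

print("Summary:", dict(hits))
```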

Nicole Scott, senior product marketing manager at the McLean, Va.-based Replica Cyber, called yesterday’s outage “a free tabletop exercise, whether you meant to run one or not.”

“That few-hour window was a live stress test of how your organization routes around its own control plane and shadow IT blossoms under the sunlamp of time pressure,” Scott said in a post on LinkedIn. “Yes, look at the traffic that hit you while protections were weakened. But also look hard at the behavior inside your org.”

Scott said organizations seeking security insights from the Cloudflare outage should ask themselves:

1. What was turned off or bypassed (WAF, bot protections, geo blocks), and for how long?
2. What emergency DNS or routing changes were made, and who approved them?
3. Did people shift work to personal devices, home Wi-Fi, or unsanctioned Software-as-a-Service providers to get around the outage?
4. Did anyone stand up new services, tunnels, or vendor accounts “just for now”?
5. Is there a plan to unwind those changes, or are they now permanent workarounds?
6. For the next incident, what’s the intentional fallback plan, instead of decentralized improvisation?

In a postmortem published Tuesday evening, Cloudflare said the disruption was not caused, directly or indirectly, by a cyberattack or malicious activity of any kind.

“Instead, it was triggered by a change to one of our database systems’ permissions which caused the database to output multiple entries into a ‘feature file’ used by our Bot Management system,” Cloudflare CEO Matthew Prince wrote. “That feature file, in turn, doubled in size. The larger-than-expected feature file was then propagated to all the machines that make up our network.”

Cloudflare estimates that roughly 20 percent of websites use its services, and with much of the modern web relying heavily on a handful of other cloud providers including AWS and Azure, even a brief outage at one of these platforms can create a single point of failure for many organizations.

Martin Greenfield, CEO at the IT consultancy Quod Orbis, said Tuesday’s outage was another reminder that many organizations may be putting too many of their eggs in one basket.

“There are several practical and overdue fixes,” Greenfield advised. “Split your estate. Spread WAF and DDoS protection across multiple zones. Use multi-vendor DNS. Segment applications so a single provider outage doesn’t cascade. And continuously monitor controls to detect single-vendor dependency.”

What happens when even college students can't do math anymore?

Hacker News
www.theatlantic.com
2025-11-19 14:05:21
Comments...
Original Article

For the past several years, America has been using its young people as lab rats in a sweeping, if not exactly thought-out, education experiment. Schools across the country have been lowering standards and removing penalties for failure. The results are coming into focus.

Five years ago, about 30 incoming freshmen at UC San Diego arrived with math skills below high-school level. Now, according to a recent report from UC San Diego faculty and administrators, that number is more than 900—and most of those students don’t fully meet middle-school math standards. Many students struggle with fractions and simple algebra problems. Last year, the university, which admits fewer than 30 percent of undergraduate applicants, launched a remedial-math course that focuses entirely on concepts taught in elementary and middle school. (According to the report, more than 60 percent of students who took the previous version of the course couldn’t divide a fraction by two.) One of the course’s tutors noted that students faced more issues with “logical thinking” than with math facts per se. They didn’t know how to begin solving word problems.
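For readers a long way from grade school, the operation the report singles out is a single step:

```latex
\[
  \frac{3}{4} \div 2 \;=\; \frac{3}{4} \times \frac{1}{2} \;=\; \frac{3}{8}
\]
```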

The university’s problems are extreme, but they are not unique. Over the past five years, all of the other University of California campuses, including UC Berkeley and UCLA, have seen the number of first-years who are unprepared for precalculus double or triple. George Mason University, in Virginia, revamped its remedial-math summer program in 2023 after students began arriving at their calculus course unable to do algebra, the math-department chair, Maria Emelianenko, told me.

“We call it quantitative literacy, just knowing which fraction is larger or smaller, that the slope is positive when it is going up,” Janine Wilson, the chair of the undergraduate economics program at UC Davis, told me. “Things like that are just kind of in our bones when we are college ready. We are just seeing many folks without that capability.”

Part of what’s happening here is that as more students choose STEM majors, more of them are being funneled into introductory math courses during their freshman year. But the national trend is very clear: America’s students are getting much worse at math. The decline started about a decade ago and sharply accelerated during the coronavirus pandemic. The average eighth grader’s math skills, which rose steadily from 1990 to 2013, are now a full school year behind where they were in 2013, according to the National Assessment of Educational Progress, the gold standard for tracking academic achievement. Students in the bottom tenth percentile have fallen even further behind. Only the top 10 percent have recovered to 2013 levels.

On the one hand, this means that math scores are close to where they were in the 1970s—hardly the Dark Ages. On the other hand, losing 50 years’ worth of math-education progress is a clear disaster. How did this happen? One theory is that the attention-shredding influence of phones and social media is to blame. The dip in math scores coincides with the widespread adoption of smartphones; by 2015, nearly three-quarters of high-school-aged kids had access to one. A related possibility is that technology is making students complacent. Emelianenko told me that students “are just not engaged in math classes anymore”; they seem to believe that they don’t need to learn math, because they can use AI instead.

Or maybe students have stopped achieving in math because schools have stopped demanding it of them. During the George W. Bush administration, federal policy emphasized accountability for public schools. Schools that saw poor performance on standardized tests received increased funding at first, but if scores still didn’t improve, they had their funding pulled. Research suggests that this helped improve math outcomes, particularly for poor Black students. After 2015, however, the federal government backed off from its accountability measures, which had faced bipartisan criticism. (Some teachers’ unions and progressive parents wanted less emphasis on standardized tests, and some conservative politicians wanted the federal government to remove itself from education policy.) Many schools across the country have shifted toward making math engaging for students at the expense of evidence-based teaching practices. And due to funding shortages or misguided efforts to improve equity, many students are held back from taking the hardest math courses.

The pandemic supercharged the decline. Districts that spent most of the 2020–21 school year mandating remote learning saw students fall more than half a grade behind in math; districts that reopened earlier saw more modest declines. These difficulties prompted teachers to further relax their standards. “Everyone was just exhausted and challenged by the circumstances around the pandemic,” Joshua Goodman, a Boston University professor of economics and education, told me. “And I think one of the reactions to that was for everyone involved to say: ‘Let’s lower our expectations. Let’s make sure that we don’t fail students when they’re not doing their work, because the world is challenging right now.’” Many districts adopted a “no zeros” policy, forcing teachers to pass students who had little command of the material. One study of public-school students across Washington State found that almost none received an F in spring 2020, while the share of students who received A’s skyrocketed. Math grades have remained elevated in the years since.

Together, these changes meant that even as students’ math preparation was stagnating, their grades were going up. The UC San Diego report notes that more than a quarter of the students who placed into the elementary- and middle-school-level remedial course last year had earned straight A’s in their high-school math classes. Almost all of them had taken advanced math courses in high school.

At the same time, the UC system eliminated its best tool for assessing students’ academic preparedness. In 2020, system leaders voted to phase standardized-test scores out of admissions decisions. They argued that the tests worsened racial divides and unfairly privileged wealthy students. But SAT and ACT scores are the most reliable predictors of a student’s math ability, the report found. “It’s not really surprising, then, that you’re going to be admitting more students who aren’t ready for mathematics, because you removed the one piece of data that would have told you that,” Morgan Polikoff, an education professor at the University of Southern California, told me. That same year, the UC system dramatically increased the number of students it enrolled from under-resourced high schools. These students are much more likely to place into Math 2, the elementary- and middle-school-level remedial course.

The new report calls on the UC system to consider reinstating the use of standardized-test scores in admissions, and for UC San Diego to bring its enrollment of students from under-resourced schools back in line with that of other selective UC colleges. “Admitting large numbers of students who are profoundly underprepared risks harming the very students we hope to support, by setting them up for failure,” the report observes.

Bringing back standardized-test scores might help elite institutions get out of the remedial-math business, but it will not address the underlying problem of widespread innumeracy. “Regardless of what a university is doing in terms of its admissions process, American students have been getting weaker in terms of their math skills for about the past decade,” Goodman told me. Already, researchers predict a massive economic cost from declining quantitative skills.

Dan Goldhaber, the director of the Center for Education Data & Research at the University of Washington, told me that he doesn’t know of anyone who denies that young people are much worse at math than they used to be. Instead, most of the arguments for optimism hinge on the idea that students might no longer need foundational math skills, because they could use AI instead—an idea he thinks is absurd.

The other academics I spoke with tended to agree. “Who is going to trust somebody who got a degree in airline engineering who doesn’t know how to think through a problem without a computer telling them the answer?” Brian Conrad, a Stanford math professor, told me. “The premise that foundational ideas don’t need to be learned anymore is a recipe for idiocracy.”

What Makes the Intro to Crafting Interpreters so Good?

Lobsters
refactoringenglish.com
2025-11-19 14:03:35
Comments...
Original Article

One of my favorite programming books is Crafting Interpreters by Bob Nystrom. It teaches you how to build a programming language from scratch. Along the way, you learn about text parsing, data structures, virtual machines, and several other skills that make you a stronger developer.

I was re-reading the book recently and realized that its introduction is delightfully effective. Developers are terrible at writing introductions, so it’s worth studying what makes the Crafting Interpreters intro so compelling.

Here’s the first paragraph of the book:

I’m really excited we’re going on this journey together. This is a book on implementing interpreters for programming languages. It’s also a book on how to design a language worth implementing. It’s the book I wish I’d had when I first started getting into languages, and it’s the book I’ve been writing in my head for nearly a decade.

What makes a good introduction? 🔗

When you’re deciding whether to read a technical book, you typically have two questions:

  1. Is this relevant to me?
  2. Is this worth my time?

A good introduction answers those questions quickly and clearly.

How does Crafting Interpreters perform against these criteria? I’ll break the opening paragraph down bit-by-bit:

I’m really excited we’re going on this journey together.

At first, this seems like a waste of a line. It doesn’t tell the reader anything about what they’ll learn or why it’s valuable. This sentence could be the opening line to any book ever, so why say it at all?

This line is subtly effective, and I’ll explain why soon.

This is a book on implementing interpreters for programming languages. It’s also a book on how to design a language worth implementing.

These lines are a direct answer to, “Is this relevant to me?” They tell you exactly what you’re going to learn from this book. If you’re interested in programming languages, you’re in the right place.

There are many books about programming languages, compilers, and interpreters, so why would you read this particular one?

It’s the book I wish I’d had when I first started getting into languages, and it’s the book I’ve been writing in my head for nearly a decade.

This is the knockout punch.

I find this line extremely compelling. If I’m interested in learning about languages, of course I want to learn from a guy who’s been thinking about these things for ten years!

I also deeply appreciate when developers write for their past selves. There’s so much institutional knowledge in software that’s inaccessible simply because nobody bothered to go back and explain it. I love it when people stop and smooth out the path for whoever follows.

This line is also effective at clarifying who the book is for. The mention of “first getting started” means this book is for programmers who are at the novice or early intermediate stages of learning language design.

This is not the Dragon Book 🔗

For most developers, creating your own programming language from scratch feels impossible. It’s the type of thing a genius at Bell Labs could do in the 1970s, but not something that’s within reach of a regular developer whose workday consists of team standups and chasing bugs around a web app.

One of the most popular books about creating a programming language is Compilers: Principles, Techniques, and Tools, which people typically refer to as the “Dragon Book.”

Here’s the first sentence of the Dragon Book:

Programming languages are notations for describing computations to people and to machines.

“Notations for describing computations?” Does it get more stuffy and academic?

I can’t imagine anyone I know explaining programming as the act of creating “notations for describing computations.” I understand how that language resonates with some developers, but my reaction reading it is, “This book is written for computer science theorists and not regular developers like me.”

I now return to the opening sentence of Crafting Interpreters, which initially felt like a waste of a line:

I’m really excited we’re going on this journey together.

How does the line feel now? Unpretentious, accessible, and welcoming. It’s what a child might say to you on a trip to the zoo.

Bob Nystrom is telling you that it’s okay that you’re a regular person and not a Computer Science PhD. Compilers and interpreters don’t have to be scary — a regular developer like you can build one with a good guide.

What makes the intro so good? 🔗

Returning to the question of what makes the intro to Crafting Interpreters so good, it boils down to these four things:

  1. It tells you what you’ll learn.
  2. It explains why you might care to learn it.
  3. It establishes the casual, approachable tone of the book.
  4. It achieves all of the above in the first four sentences.

If you haven’t read Crafting Interpreters, I highly recommend it. The full book is available for free online, but there are also print and ebook versions. I’m not even interested in language design, but I still found the book extremely valuable and engaging.

To give you a sense of the care that Bob Nystrom put into his book, he hand-drew all 181 illustrations. The lettering in the illustrations looks like a font, but that’s because he spent hours and hours practicing his freaking penmanship for this book! How many software authors can say that?

Podcast: The Epstein Email Dump Is a Mess

403 Media
www.404media.co
2025-11-19 13:57:26
We talk the terrible format of the latest Epstein dump; how a contractor is hiring randos on LinkedIn to physically track immigrants for $300; and a new code of conduct in the adult industry....
Original Article

We talk the terrible format of the latest Epstein dump; how a contractor is hiring randos on LinkedIn to physically track immigrants for $300; and a new code of conduct in the adult industry.

Podcast: The Epstein Email Dump Is a Mess

We start this week with a rant from Jason about how the latest dump of Epstein emails was released. It would be a lot easier to cover them if they were published differently! After the break, we talk about Joseph’s piece about a contractor hiring essentially randos off LinkedIn to physically track immigrants for $300. In the subscribers-only section, Sam tells us about a new adult industry code of conduct that has been a long time coming.

Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version too. It will also be in the show notes in your podcast player.


Europe's cookie nightmare is crumbling. EC wants preference at browser level

Hacker News
www.theverge.com
2025-11-19 13:54:40
Comments...
Original Article

Tom Warren is a senior editor and author of Notepad, who has been covering all things Microsoft, PC, and tech for over 20 years.

The EU’s cookie consent policies have been an annoying and unavoidable part of browsing the web in Europe since their introduction in 2018. But the cookie nightmare is about to crumble thanks to some big proposed changes announced by the European Commission today .

Instead of having to click accept or reject on a cookie pop-up for every website you visit in Europe, the EU is preparing to enforce rules that will allow users to set their preferences for cookies at the browser level. “People can set their privacy preferences centrally — for example via the browser — and websites must respect them,” says the EU. “This will drastically simplify users’ online experience.”

This key change is part of a new Digital Package of proposals to simplify the EU’s digital rules, and will initially see cookie prompts become a simplified, single-click yes-or-no prompt ahead of the “technological solutions” eventually coming to browsers. Websites will be required to respect cookie choices for at least six months, and the EU also wants website owners not to use cookie banners for “harmless uses” like counting website visits, to reduce the number of pop-ups.
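The Commission hasn't said what the browser-level mechanism will look like. As a purely illustrative sketch, a site could honor the existing Global Privacy Control signal (the Sec-GPC request header) as a stand-in for whatever standard eventually emerges; everything below, including the cookie name, is an assumption rather than anything the EU has specified:

```python
# Illustration only: the EU has not defined the browser-level signal yet.
# This toy server honors the existing Sec-GPC header as a stand-in.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        opted_out = self.headers.get("Sec-GPC") == "1"
        body = (b"Opt-out signal received; no tracking cookie set.\n"
                if opted_out else b"No opt-out signal; tracking cookie set.\n")
        self.send_response(200)
        if not opted_out:
            # Hypothetical non-essential analytics cookie, only set without an opt-out.
            self.send_header("Set-Cookie", "analytics_id=abc123; Max-Age=15552000")
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```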

The sheer number of cookie pop-ups across Europe means people often click any button just to get access to a website, out of annoyance rather than concern for their privacy. “This is not a real choice made by citizens to protect their phones or computers and to choose what happens to their data,” says the European Commission. “Today’s proposal modernizes the ‘cookies rules’, with the same strong protections for devices, allowing citizens to decide what cookies are placed on their connected devices (e.g. phones or computers) and what happens to their data.”

The EU’s latest proposals will now head to the European Parliament. They’ll need to be approved by the EU’s 27 member states during a process that could take some time yet, but Europe’s cookie nightmare looks a big step closer to being over.


"Data Crunch": AI Boom Threatens to Entrench Fossil Fuels and Compromise Climate Goals

Democracy Now!
www.democracynow.org
2025-11-19 13:51:18
A new report titled “Data Crunch: How the AI Boom Threatens to Entrench Fossil Fuels and Compromise Climate Goals” from the Center for Biological Diversity warns the booming artificial intelligence industry’s high resource consumption threatens the world’s climate goals, desp...
Original Article

A new report titled “Data Crunch: How the AI Boom Threatens to Entrench Fossil Fuels and Compromise Climate Goals” from the Center for Biological Diversity warns the booming artificial intelligence industry’s high resource consumption threatens the world’s climate goals, despite rosy prognoses of AI’s projected benefits. Co-author Jean Su says that the increasing use of AI for military applications offsets any positives it offers for climate change mitigation. “What we need to do is empower communities and countries, especially in the Global South, to ask what is the public benefit that they are supposed to get from AI, and weigh it very carefully against the severe cost to their climate, to their electricity prices and to their water.”



Guests
  • Jean Su

    Energy Justice Director at the Center for Biological Diversity.


Please check back later for full transcript.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.

CISA gives govt agencies 7 days to patch new Fortinet flaw

Bleeping Computer
www.bleepingcomputer.com
2025-11-19 13:44:56
CISA has ordered U.S. government agencies to secure their systems within a week against another vulnerability in Fortinet's FortiWeb web application firewall, which was exploited in zero-day attacks. [...]...
Original Article


CISA has ordered U.S. government agencies to secure their systems within a week against another vulnerability in Fortinet's FortiWeb web application firewall, which was exploited in zero-day attacks.

Tracked as CVE-2025-58034, this OS command injection flaw can allow authenticated threat actors to gain code execution in low-complexity attacks that don't require user interaction.

"An Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection') vulnerability [CWE-78] in FortiWeb may allow an authenticated attacker to execute unauthorized code on the underlying system via crafted HTTP requests or CLI commands," Fortinet said on Tuesday.


The cybersecurity agency added the vulnerability to its Known Exploited Vulnerabilities Catalog the same day, giving Federal Civilian Executive Branch (FCEB) agencies until Tuesday, November 25th, to secure their systems against attacks as mandated by the Binding Operational Directive (BOD) 22-01.
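BOD 22-01 compliance ultimately comes down to tracking what lands in that catalog, which CISA publishes as a JSON feed. Here is a minimal sketch that polls the feed and checks for a given CVE; the URL and field names match CISA's published feed as best I can tell, but treat them as assumptions and confirm against cisa.gov before automating anything:

```python
# Sketch: check whether a CVE appears in CISA's Known Exploited Vulnerabilities feed.
# URL and field names are assumptions based on CISA's published JSON schema.
import json
import urllib.request

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def kev_entry(cve_id: str) -> dict | None:
    with urllib.request.urlopen(KEV_URL) as resp:
        catalog = json.load(resp)
    for vuln in catalog.get("vulnerabilities", []):
        if vuln.get("cveID") == cve_id:
            return vuln
    return None

if __name__ == "__main__":
    entry = kev_entry("CVE-2025-58034")
    if entry:
        print("Listed in KEV; remediation due:", entry.get("dueDate"))
    else:
        print("Not (yet) listed in the KEV catalog.")
```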

"This type of vulnerability is a frequent attack vector for malicious cyber actors and poses significant risks to the federal enterprise," CISA warned.

"With recent and ongoing exploitation events [..], a reduced remediation timeframe of one week is recommended," it added, referring to a second FortiWeb flaw (CVE-2025-64446) exploited in zero-day attacks that Fortinet silently patched in late October.

On Friday, CISA also added the CVE-2025-64446 vulnerability to its catalog of actively exploited security flaws, ordering U.S. federal agencies to patch their devices by November 21st.

BleepingComputer has reached out to a Fortinet spokesperson with questions about these flaws, but we have yet to receive a response.

In August, Fortinet addressed another command injection vulnerability (CVE-2025-25256) in its FortiSIEM solution, following a GreyNoise report warning of a surge in brute-force attacks against Fortinet SSL VPNs.

Fortinet vulnerabilities are commonly exploited in cyber espionage and ransomware attacks. For instance, in February, Fortinet revealed that a Chinese hacking group tracked as Volt Typhoon exploited two FortiOS SSL VPN flaws to breach a Dutch Ministry of Defence military network using a custom remote access trojan (RAT) called Coathanger.


Sudanese Researcher Lina Yassin on COP30 Climate Talks, UAE-Funded Proxy War in Sudan over Gold & More

Democracy Now!
www.democracynow.org
2025-11-19 13:36:39
Sudanese climate diplomacy researcher Lina Yassin is supporting the Least Developed Countries Group at the U.N. climate summit in Belém, Brazil. The group is composed of 44 countries, including Sudan, whose cumulative emissions amount to less than 1% of total global emissions. “They are the co...
Original Article

Sudanese climate diplomacy researcher Lina Yassin is supporting the Least Developed Countries Group at the U.N. climate summit in Belém, Brazil. The group is composed of 44 countries, including Sudan, whose cumulative emissions amount to less than 1% of total global emissions. “They are the countries that have the least amount of resources to respond to the climate crisis,” explains Yassin.

Yassin also discusses the humanitarian crisis in Sudan, where the estimated death toll is now at 150,000. “This is a proxy war funded by foreign nationals who have vested interests in Sudan’s resources. … The UAE has been using the RSF militia to illegally smuggle gold out to finance the war and finance their own gold reserves. The UAE is also really interested in Sudan’s agricultural lands.”



Guests
  • Lina Yassin

    climate diplomacy researcher at the International Institute for Environment and Development in London.

Please check back later for full transcript.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.

Susana Muhamad, Ex-Colombian Environment Minister, on COP30 Talks, Trump, Gaza & More

Democracy Now!
www.democracynow.org
2025-11-19 13:19:04
At the U.N. Climate Change Conference in Belém, Brazil, we sit down with Colombian environmentalist Susana Muhamad, who served as Colombia’s minister of environment and sustainable development from 2022 to 2025. Muhamad discusses the U.N.'s mandate to mitigate the acceleration of human-caused ...
Original Article

This is a rush transcript. Copy may not be in its final form.

AMY GOODMAN : This is Democracy Now! , democracynow.org. We’re broadcasting from the U.N. climate summit — that’s COP30 — from the Brazilian city of Belém, the gateway to the Amazon. I’m Amy Goodman.

As delegates and leaders representing more than 190 countries continue negotiations, the Brazilian presidency is expected to release a new draft text today addressing some of the most pressing climate demands, including financing and the transition away from fossil fuels. Brazilian President Luiz Inácio Lula da Silva is in Belém today as pressure is mounting to include a roadmap to phase out oil, coal and gas in the final climate text. He may pass us at any moment. More than 80 countries from Africa, Latin America, Asia and the Pacific, as well as European Union member states and the United Kingdom, have joined those calls, with Colombia leading the efforts.

This all comes as frustration is mounting over the refusal by wealthier nations and some of the world’s worst polluters to properly fund climate adaptation efforts for Global South countries most impacted by the climate crisis, but those that did not cause it.

We’re now joined by Susana Muhamad, longtime environmentalist. She served as Colombia’s minister of environment and sustainable development from 2022 to this year. She was also the president of last year’s United Nations Biodiversity Conference held in Cali, Colombia. She is of Palestinian descent.

We welcome you back to Democracy Now! , Susana. We have spoken to you at past COPs. You were the main climate negotiator at one point for Colombia. If you can initiate this global audience into what COP means, even that, the Conference of Parties 30, 30 years, why this is so important, and what’s happening today with the Brazilian president here, and what you’re seeing in the texts that have been presented?

SUSANA MUHAMAD : Thank you so much, Amy, and I’m so glad to be here again this year.

Well, it’s very important to let the audience understand what this environment means. COP is the Conference of the Parties. Who are the parties? And the parties to what? The parties are the countries of the world that have subscribed the Convention on Climate Change. And this convention is an international treaty signed by the countries to be able to control and stabilize the climate, because we knew since 30 years ago — and that’s why it’s called COP30, because it’s the 30th time that the parties meet to try to solve this issue.

But what’s the reality? You have faced it in the United States during these years, that things are getting out of control, the climate. And the emissions, the CO2 emissions that produce this climate crisis, are increasing, not decreasing, and also that the climate is becoming more wild, if we can say it in simple words, and more dangerous, the situation.

So, this 2025 COP is critical, because scientists said that we needed to cut emissions by 42% compared to 2019 in 2030, but the reality is that we are in a trajectory, rather than to decrease 42%, to increase 16%. And rather than to stabilize the climate in 1.5, we are actually going to a trajectory to have an increase in temperature of 2.8 Celsius. And just to give your audience a dimension of what that means, we have never experienced in the last two geological eras this temperature. Humanity, since it’s a species alive on planet Earth, has never experienced this temperature. We have no clue, no idea what this means. And when we could see that happening? At the end of this century. And who will be living this? The children that were already born in this century. So, that’s why this conference is the only fora that we have internationally, globally. So, countries, agree and take action.

AMY GOODMAN : I’m going to ask you about what Kumi Naidoo yesterday, the South African climate activist, called “the F-word”: fossil fuels. Now, what would it mean to include the phaseout of fossil fuels in the final climate text? And how is it possible that that hasn’t happened before?

SUSANA MUHAMAD : Well, it happened for the first time in Dubai, which was actually a very interesting process, because it was in the Arab world, in the United Arab Emirates. And you know that the largest reserves of oil in the world are there, and the cheapest ones. So, it was a big effort that we could include that an objective is to phase out fossil fuels, which actually was not even that language. It was “transition away” from fossil fuels, because it’s a transition. But what does that mean? That we should change our source of energy. We should stop producing, extending the frontier of production, and also stop the consumption and burning of fossil fuels. And that is a whole change.

So, now what happens is that it has been two years since Dubai, and the Brazilian government actually gave licenses for exploration in the Amazon, in the Amazon Delta, outside in deep waters where the Amazon River comes, just weeks before this conference. It’s a very contradictory message, because, you know, the Amazon is the — is one of the pillars of climate stability. The Amazon absorbs so much CO2. And what happens when we create deforestation is that every hectare deforested emits CO2 and stops absorbing. So, if we lose the Amazon, which means deforestation of around 25%, we actually lose the control of the climate. And we are in the Amazon. This COP is in the Amazon in 2025, five years before we need to reach the goal to stabilize the climate. And the Brazilian government gives licenses for exploration of oil a few weeks before.

So, the elephant in the room is the fossil fuels. The Arab countries don’t want to speak about it. Actually, they are trying to bring back the decision of Dubai and say, “Oh, we are — we don’t feel that that’s correct,” which was a big effort. But the good news is that yesterday a coalition of 80 countries stood up and said, “We have to develop a pathway for the transition. We cannot leave this COP .” So, the elephant in the room has become now the main issue. And it has light, and it’s now in negotiated text. And now President Lula is coming today, and Brazil is now committed that this roadmap advances.

But I give you a sober fact. Right now countries are planning to expand fossil fuel productions in 120% more than what is required to stabilize the climate at 1.5 Celsius. And for you who are seeing us out there, remember what was the year 2024, and think about in your local area how was the climate that year. That was the first year that we reached, as an average temperature in the planet, 1.5 Celsius. I remember in my country, 400 municipalities out of 1,100 were without water. The largest city, Bogotá, went into rationalizing water for one year. We had fires. We have drought. And it was actually really, like, if that’s going to be the new reality, I cannot imagine what is with 2.8. And that’s actually the environmental limiting we are working towards.

And so, to summarize, the struggle in this COP is between the fossil capital and the countries that want to continue to live on fossil fuels, and the other countries that actually say we have to put humanity, environmental security first and save the children that were already born in this century.

AMY GOODMAN : Susana Muhamad, you used to be the chief negotiator on climate at the COPs. You were the climate minister. So, can you talk about the power of the oil lobby? I mean, we’re talking about over 1,600, the largest group of lobbyists ever. Also, there are 900 Indigenous representatives. That is the largest group of Indigenous representation. So, talk about the balance. And also, are Indigenous demands being considered here? I mean, Indigenous leaders shut down this COP on Friday for hours.

SUSANA MUHAMAD : It’s exactly — I think that’s the picture of the 21st century. You want to summarize this issue in one picture? It is the picture of the woman from the Amazon, the Indigenous woman, blocking the entrance of the COP , and on the other side of the fence, the military protecting the fortress of the COP . What fortress are you protecting? You’re protecting the fossil fuel capitalists. That’s who are you protecting.

And actually, the influence in these rooms of the fossil fuel lobby is huge. That comes in two forms: petrostates that have — even Colombia has its own oil state company, but also private companies that have their lobbyists. And it becomes a battle of, actually, the fossil fuel capital resisting change. This is what COP has become. This is what it’s about now, resisting the change from the fossil fuel capital.

And outside the fence of the COP , outside the military, not 900 — 900 were the ones that got accredited. We have 4,000 Indigenous peoples from all the Amazon that came during 25 days by boat through the Amazon River, from all the communities, from the eight countries of the Amazon. They have what they call the Indigenous Camp. They are there outside. These people, they do have the knowledge to save the planet. They do have the knowledge to save the Amazon. And we are not talking about that here. So, that’s why, on Tuesday, actually, they came by force, and they took over these corridors. And that presence of what I call the blood, the sweat, the tears of the people at the frontlines of the climate crisis is what we need inside this room. And unfortunately, although we have to say it is much better in Brazil than when we had the COP in Egypt, in the United Arab Emirates and Azerbaijan, which we didn’t even were able to do the protest in the street, the march of climate — but that’s actually, I think, the forces in the 21st century, the people at the frontlines against the fossil fuel capital.

AMY GOODMAN : We only have a minute to go, but I wanted to ask you two quick questions. The significance of President Trump, for the first time the United States not sending a high-level delegation? Do you actually think that’s good or bad in terms of how they influence the outcome? And also, you are a Colombian of Palestinian descent. And your thoughts right now on what’s happening in Gaza?

SUSANA MUHAMAD : Yes, on the first one, it’s absolutely terrible the United States is not here. It is the second-biggest polluter in the world. And we, to be able — I mean, this is the problem. This problem cannot be solved by any single country. It has to be the global community together.

But the worst thing is the escalation of the conflict. Before, it used to be blocking here, as some countries are doing, but now it’s with military action. So, having the U.S. military deployment of military forces to the Caribbean have killed 80 Latin Americans, in total impunity, in the last two months, with nobody stopping that. And actually, my fear is that what President Trump is coming for is for the oil resources of Venezuela and also the minerals of Colombia and Latin and South America. If this is going to be the escalation of conflict, we actually are in a very bad shape. That’s why it will be a fantastic message from Brazil that more than 80 countries — the majority of countries said, “No, we are not going to change the agenda because there is military bullying from any powerful country right now. And we have to secure the humanity.”

And on Gaza, these are not issues that are not correlated, because they use Gaza as a laboratory, because they were able to move forward. And even President Trump declared in the Israeli parliament that the weapons were given by the U.S. to commit genocide, because this has happened, and the world, the United Nations, nobody could stop this. That’s why they have now the license, the green light, to start moving forward to pursue whatever interests they think they want to pursue. And this is why we have to strengthen the international community and multilateralism and also the resistance in the frontlines of these issues, because, I tell you something, people will not allow any military power to kill them. Even the Palestinians right now, with everything that has happened, are still resisting. And that’s actually a lesson from all of us, because if the defense of the environment will become a military conflict, we are already seeing in Palestine what will happen in the future. I don’t wish that future. I think we can actually — humanity can do better, and that we can be very proactive and productive in shifting this situation of climate crisis, rather than continue investing in arms, in armies and in defense, as the whole world is doing. So, there’s no money for climate, but then everybody, in two seconds, puts 5% of their GDP in defense.

AMY GOODMAN : Well, Susana Muhamad, we want to thank you so much for being with us, a longtime environmentalist, served as Colombia’s minister of environment and sustainable development from 2022 to earlier this year, also president of the 2024 U.N. Biodiversity Conference held in Cali, Colombia. She is of Palestinian descent.

When we come back, we’ll speak with Sudanese climate researcher Lina Yassin, who is here at COP30 supporting the least developed countries, the LDC Group. And we’ll speak with Jean Su, co-author of the new report, “Data Crunch: How the AI Boom Threatens to Entrench Fossil Fuels and Compromise Climate Goals.” We’re at COP30 in Belém, Brazil, gateway to the Amazon. Stay with us.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.

"Bring the Truth Out of the Shadows": Survivors Hail Congressional Vote to Release Epstein Files

Democracy Now!
www.democracynow.org
2025-11-19 13:14:15
Congress has finally voted to compel the Justice Department to release the files on Jeffrey Epstein, the deceased convicted sex offender and power broker. After a near-unanimous vote in both legislative chambers, President Trump now says he will sign the bill into law. We play statements from a pres...
Original Article

Congress has finally voted to compel the Justice Department to release the files on Jeffrey Epstein, the deceased convicted sex offender and power broker. After a near-unanimous vote in both legislative chambers, President Trump now says he will sign the bill into law. We play statements from a press conference held by survivors of Jeffrey Epstein’s abuse, who are celebrating the long-awaited win for transparency and accountability.


Transcript

This is a rush transcript. Copy may not be in its final form.

AMY GOODMAN : After months of stonewalling by House Speaker Mike Johnson, Congress finally voted Tuesday to compel the Justice Department to release the files on Jeffrey Epstein, the deceased convicted sex offender and power broker. The House vote was unanimous, except for one. That’s 427 to 1 in favor of releasing the files. Republican Congressman Clay Higgins of Louisiana was the lone “no” vote.

Several survivors of Epstein’s abuse were seated in the gallery during the vote and embraced, cheering when it passed.

The Senate then voted unanimously to pass the House bill. It’s a stunning outcome after months of fierce opposition by President Trump and the Republican Party. Trump reversed course over the weekend when he saw too many defections, and said he would sign the bill.

Ahead of the vote, survivors of Jeffrey Epstein’s abuse held a news conference on Capitol Hill. These are some of their voices.

LISA PHILLIPS : My name is Lisa Phillips. It’s an honor to stand here again for something America is finally united on: the immediate release of the entire Epstein files. In a divided nation, this is one demand we all share. …

So, today, we are launching something historic: the first national survivor-led political movement in America, nonpartisan, laser-focused on exposing the systems, the loopholes, power structures and silencing mechanisms that have protected predators for far too long. We are stepping directly into the halls of power, into the political arena. We will help rewrite laws that failed us, and build protections for our nation’s children, together, targeted by sexual exploitation. Together today, survivors begin our own fight: the survivor revolution. And we intend to change this nation for the better.

JENA - LISA JONES : Hello, everybody. I am Jena-Lisa Jones. … This was me at 14 years old. I was a child. I was in ninth grade. I was hopeful for life and what the future had held for me. He stole a lot from me by — at 14. …

Sexual abuse is not a Republican issue. It is not a — or a Democratic issue. It is not a — it is also not a hoax. We are here as American survivors of a man who used his wealth and power to hurt young girls and women. The world should see the files to know who Jeffrey Epstein was and how the system catered to him and failed us. Emotionally, this process has been distressing. First, the administration said it would release everything, and applauded President Trump for that. Then it fought to release nothing.

ANNIE FARMER : Good morning. My name is Annie Farmer, and this is a photo of me and my sister Maria Farmer around the time I was 16 and she was 25. That’s how old we were when we were abused by Epstein and Maxwell. …

This is not an issue of a few corrupt Democrats or a few corrupt Republicans. This is a case of institutional betrayal. Because these crimes were not properly investigated, so many more girls and women were harmed. My sister, because of her bravery, was repeatedly threatened and lived in fear, with dire consequences for her health and her career. Thirty years later, even as oceans of allegations and obvious truths have emerged, the government has still not chosen transparency. This is why we have all come together as one united voice to demand the release of all the Epstein files and to finally bring the truth out of the shadows.

AMY GOODMAN : Annie Farmer’s sister Maria Farmer first reported Jeffrey Epstein’s abuse to the FBI in 1996, nearly 30 years ago. The survivors of Epstein and Ghislaine Maxwell’s abuse were speaking at a Capitol Hill news conference ahead of the House and Senate votes to force the DOJ to release the Epstein files, unanimous in the Senate, only one dissenting vote in the House.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.



Meet ShinySp1d3r: New Ransomware-as-a-Service created by ShinyHunters

Bleeping Computer
www.bleepingcomputer.com
2025-11-19 13:01:09
An in-development build of the upcoming ShinySp1d3r ransomware-as-a-service platform has surfaced, offering a preview of the upcoming extortion operation. [...]...
Original Article


An in-development build of the upcoming ShinySp1d3r ransomware-as-a-service platform has surfaced, offering an early look at the extortion operation.

ShinySp1d3r is the name of an emerging RaaS created by threat actors associated with the ShinyHunters and Scattered Spider extortion groups.

These threat actors have traditionally used other ransomware gangs' encryptors in attacks, including ALPHV/BlackCat, Qilin, RansomHub, and DragonForce, but are now creating their own operation so that they and their affiliates can deploy attacks.


News of the upcoming RaaS first came to light on a Telegram channel, where threat actors calling themselves "Scattered Lapsus$ Hunters," a name drawn from the three gangs forming the collective (Scattered Spider, Lapsus$, and ShinyHunters), were attempting to extort victims of data theft at Salesforce and Jaguar Land Rover (JLR).

The ShinySp1d3r encryptor

BleepingComputer discovered a sample of the ShinySp1d3r encryptor after it was uploaded to VirusTotal. Since then, additional samples have been uploaded, allowing researchers to analyze the upcoming ransomware encryptor.

Note: While some of our images show the name as 'Sh1nySp1d3r,' BleepingComputer has been told that the RaaS is operating under ShinySp1d3r and the name will be changed in future builds.

The encryptor is developed by the ShinyHunters extortion group, which is building it from scratch rather than reusing a previously leaked codebase, such as LockBit's or Babuk's.

Image: The ShinySp1d3r ransomware encryptor (Source: BleepingComputer)

As a result, the ShinySp1d3r Windows encryptor offers many features, some common to other encryptors and others not seen before.

According to analysis shared with BleepingComputer by analysts at ransomware recovery firm Coveware , these features include:

  • Hooking the EtwEventWrite function to prevent data from being logged to the Windows Event Viewer.
  • Kills processes that keep a file open and would block encryption, by iterating over processes holding a handle to the file and terminating them. The encryptor also has a 'forceKillUsingRestartManager' function that uses the Restart Manager API, but it is not implemented yet.
  • Fills free space on a drive by writing random data into files called 'wipe-[random].tmp'. This is done to overwrite any deleted files, making them more challenging, if not impossible, to recover.
  • Kills a hard-coded list of processes and services.
  • Checks available memory to calculate the optimal amount of data to read at a time.
  • Contains the ability to propagate to other devices on the local network through one of these methods:
    • deployViaSCM - Creates a service to run the malware
    • deployViaWMI - Runs the malware via WMI with Win32_Process.Create
    • attemptGPODeployment - Creates a GPO startup script in scripts.ini to run the malware
  • Contains anti-analysis features and overwrites the contents of a memory buffer to prevent forensic analysis.
  • Deletes Shadow Volume Copies to prevent them from being used to restore encrypted files.
  • Searches for hosts with open network shares and attempts to encrypt them.
  • Encrypts files with different chunk sizes and offsets. It is unclear why it does that, or whether this information is stored in an encrypted file header (more about that later).

When encrypting files, the ransomware uses the ChaCha20 encryption algorithm, with the ChaCha20 key protected using RSA-2048. Each file will have its own unique extension, as shown in the folder below, which ShinyHunters claimed to BleepingComputer was based on a mathematical formula.

Image: Folder encrypted by ShinySp1d3r ransomware (Source: BleepingComputer)

Each encrypted file contains a file header that begins with SPDR and ends with ENDS, as shown in the image below. This header contains information about the encrypted file, including the filename, the encrypted private key, and other metadata.

Image: Files encrypted by ShinySp1d3r ransomware (Source: BleepingComputer)
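For defenders, those marker bytes are enough to build a rough inventory of candidate encrypted files during incident response. The sketch below relies only on the header details described above; the real header length is not public, so the scan window and everything else here is an assumption, not a parser for the actual format:

```python
# Defensive sketch: flag files whose leading bytes start with b"SPDR" and contain
# b"ENDS", per the header markers described above. The 64 KiB window is an assumption.
from pathlib import Path

MAGIC_START = b"SPDR"
MAGIC_END = b"ENDS"

def looks_encrypted(path: Path, scan_window: int = 64 * 1024) -> bool:
    try:
        with path.open("rb") as f:
            head = f.read(scan_window)
    except OSError:
        return False
    return head.startswith(MAGIC_START) and MAGIC_END in head

def scan(root: str) -> list[Path]:
    return [p for p in Path(root).rglob("*") if p.is_file() and looks_encrypted(p)]

if __name__ == "__main__":
    for hit in scan("."):  # point this at the affected volume or share
        print(hit)
```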

Every folder on the encrypted device will contain a ransom note, with a filename currently hardcoded as R3ADME_1Vks5fYe.txt, that includes information on what happened to a victim's files, how to negotiate the ransom, and a TOX address for communications.

The ransom note also includes a link to the Tor data leak site, but currently has a placeholder onion URL that is not valid.

"This communication has been issued on behalf of the ShinySp1d3r group. It is intended exclusively for internal incident response personnel, technical leadership, or designated external advisors," begins the ransom note.

"A critical encryption event has taken place within your infrastructure. Certain digital assets have become inaccessible, and selected data was securely mirrored. The goal of this message is not disruption, but to provide your team with a confidential opportunity to resolve the situation efficiently and permanently."

Image: ShinySp1d3r ransom note (Source: BleepingComputer)

The ransom note goes on to say that victims have three days to begin negotiations before the attack is made public on the data leak site.

In addition to the ransom notes, the encryptor will also set a Windows wallpaper that warns the victim of what happened and urges them to read the ransom note.

Image: ShinySp1d3r wallpaper (Source: BleepingComputer)

While BleepingComputer only obtained the Windows encryptor, ShinyHunters says they have completed a CLI build with runtime configuration and are close to finishing versions for Linux and ESXi. They also said that a separate "lightning version" is in development, optimized for speed.

"We're also working on a "lightning version" pure ASM, its like lockbit green - another windows locker variant but in pure assembly and its pretty simple,” ShinyHunters told BleepingComputer.

As this is a debug build of an in-development ransomware, we will likely see additional features added in the future.

As for the RaaS operation itself, ShinyHunters says it will be run by their group under the Scattered LAPSUS$ Hunters name.

"Yes, it will be lead by me/us 'ShinyHunters' but operated under the Scattered LAPSUS$ Hunters (SLH) brand, hence the name ShinySp1d3r, to demonstrate the 'alliance' or 'cooperation' between these groups," ShinyHunters told BleepingComputer.

The threat actor also claims that any company in the healthcare sector, including pharmaceutical companies, hospitals, clinics, and insurance firms, cannot be targeted with their encryptor. However, BleepingComputer has been told this by other ransomware gangs in the past, many of whom later allowed those policies to be violated.

Similar to other ransomware operations, ShinyHunters says attacks against Russia and other CIS countries are prohibited, as many affiliates will come from those regions and could become targets of law enforcement.


Your Smartphone, Their Rules: App Stores Enable Corporate-Government Censorship

Hacker News
www.aclu.org
2025-11-19 13:28:40
Comments...
Original Article

Senior Staff Technologist, ACLU Speech, Privacy, and Technology Project

November 18, 2025


Who controls what you can do on your mobile phone? What happens when your device can only run what the government decides is OK? We are dangerously close to this kind of totalitarian control, thanks to a combination of government overreach and technocratic infrastructure choices.

Most Americans have a smartphone, and the average American spends over 5 hours a day on their phone . While these devices are critical to most people’s daily lives, what they can actually do is shaped by what apps are readily available. A slim majority of American smartphone users use an iPhone, which means they can only install apps available from Apple’s AppStore. Nearly all the rest of US smartphone users use some variant of Android, and by default they get their apps from Google’s Play Store.

Collectively, these two app stores shape the universe of what is available to most people as they use the Internet and make their way through their daily lives. When those app stores block or limit apps based on government requests, they are shaping what people can do, say, communicate, and experience.

Recently, Apple pulled an app called ICEBlock from the AppStore, making it unavailable in one fell swoop. This app was designed to let people anonymously report public sightings of ICE agents. In the United States people absolutely have a First Amendment right to inform others about what they have seen government officials doing and where — very much including immigration agents whose tactics have been controversial and violent. Apple pulled the ICEBlock app at the demand of the US Department of Justice. The following day, Google pulled a similar app called Red Dot from the Google Play Store.

The DOJ’s pressuring of Apple is an unacceptable, censorious overreach. And Google’s subsequent removal of Red Dot looks like troubling premature capitulation. While some experts and activists have expressed concerns over ICEBlock’s design and development practices, those concerns are no reason for the government to meddle in software distribution. The administration’s ostensible free speech warriors are trying to shape how Americans can communicate with each other about matters of pressing political concern.

Infrastructure choices
But the government’s overreach isn’t the whole story here. The current structure of the mobile phone ecosystem enables this kind of abuse and control.

Apple’s iOS (the operating system for any iPhone) is designed to only be able to run apps from the AppStore. If Apple hasn’t signed off on it, the app won’t run. This centralized control is ripe for abuse:

  • Apple has handed the Chinese government control over what apps are available to iPhone users in China, including banning gay dating apps.
  • The corporation has used its authority over the AppStore to block a game that critiqued its labor practices.
  • Apple’s guidelines say that “‘Enemies’ within the context of a game cannot solely target a specific … government, corporation, or any other real entity.” That represents a potential for sweeping censorship of anyone who wants to use the art of games to criticize companies or otherwise advance political messages.
  • It banned the popular game Fortnite from the App Store as it was battling the gamemaker to get a bigger cut of money from user transactions.
  • In 2012 Apple rejected an app that compiled reports of highly controversial overseas drone strikes by the U.S. government during the “War on Terror.”

Unlike Apple, Google’s Android operating system has traditionally allowed relatively easy access to “sideloading”, which just means installing apps through means other than Google’s Play Store. Although most installations default to getting apps from the Play Store, the availability of sideloading means that even if Google censors apps in the Play Store, people can still install them. Even apps critical of Google can make it onto an Android device. It’s also possible to run a variant of Android without the Play Store at all, such as GrapheneOS.

Unfortunately that is all set to change with a recent Google announcement that it will block apps from “certified Android” devices (which is nearly all Android phones) unless they come from what Google calls a “verified developer.” This means that the common Android user trying to install an app will have to get Google’s blessing: does this app come from someone that Google has “verified”? How Google will decide who is allowed to be verified and who is not is still unclear. Can a developer become “unverified”?

This upcoming change is framed by Google as a security measure, but merely knowing the identity of the developer of an app doesn’t provide any security. So the only way that the “verified developer” requirement can offer security is if Google withholds “verified developer” status from people it deems bad actors. But Google’s ability to withhold that status can be abused in the same way that Apple’s AppStore lock-in is being abused. A government will simply make a demand: “treat this developer as a bad actor” and effectively cut off any app by targeting its developer.

When a lever of control is available, the would-be censors will try to use it. It has never been true that someone who buys a Lenovo or Dell laptop, for example, has to let Lenovo or Dell tell them what programs they can and cannot install on their computer. Yet that will soon be the situation with regard to nearly all cell phones used in the United States.

Note that American iPhones are limited to only apps from the AppStore, but European Union (EU) iPhones don’t have that restriction. The EU’s Digital Markets Act (DMA) required Apple to permit alternate app stores and sideloading (which Apple calls “web distribution”). As a result, marketplaces like AltStore are starting to become available — but Apple only lets EU customers use them. The European regime is not perfect, however; while sideloaded apps and alternative app stores aren’t subject to the app store’s constraints, they are still obliged to follow Apple’s “Notarization” requirements, which require Apple to review all iOS apps – even from these alternate sources – on the basis of several vaguely worded rationales. For example, if the DOJ were to claim that ICEBlock “promoted physical harm” (even though it clearly does not), Apple could use this as an excuse to justify revoking its notarization of the app, which would prevent it from being installed even from these alternate channels.

App store security and surveillance
Both Apple and Google make claims that their app distribution mechanisms improve security for their users. And clearly, these tech giants do block some abusive apps by exercising the control they have.

But both of them also regularly allow apps that contain common malicious patterns, including many apps built with surveillance tooling that sell their users’ data to data brokers . If either tech giant were serious about user security, they could ban these practices, but they do not. Google’s security claims are also undermined by the fact that the cellphone hacking company Cellebrite tells law enforcement that Google’s Pixel phones can be hacked, while those running GrapheneOS, created by a small non-profit, cannot. (Asked by a reporter why that was so, Google did not respond .)

Making matters worse, organizations like Google are unclear about their policies, and some of their policy statements can put developers and users at risk. Discussing blocking Red Dot, for example, Google told 404 Media that “apps that have user generated content must also conduct content moderation.” This implies that Google could become unwilling to distribute fully end-to-end encrypted apps, like Signal Private Messenger or Delta Chat , since those app vendors by design are incapable of reviewing user-generated content. End-to-end encrypted apps are the gold standard for secure communications, and no app store that signals a willingness to remove them can claim to put security first.

In addition, even if you’ve carefully curated the apps you have installed from these dominant app stores to avoid spyware and use strongly secure apps, the stores themselves monitor the devices, keeping dossiers of what apps are installed on each device, and maybe more . Being a user of these app stores means being under heavy, regular surveillance.

Other options exist
These centralized, surveilled, censorship-enabling app stores are not the only way to distribute software. Consider alternative app stores for Android, like Accrescent , which prioritizes privacy and security requirements in its apps, and F-Droid , which enables installation of free and open source apps. In addition to offering quality tools and auditing, F-Droid’s policies incentivize the apps distributed on the platform to trim out the overwhelming amounts of corporate spyware that infest both Google’s and Apple’s app stores. Neither F-Droid nor Accrescent does any surveillance of its users at all.

The F-Droid developers recently wrote about the impact that Google’s upcoming developer registration requirements are likely to have on the broader ecosystem of privacy-preserving Android apps. The outcome doesn’t look good: the ability to install free and open source software on a common device might be going away. Those few people left using unusual devices (“uncertified” Android deployments like GrapheneOS , or even more obscure non-Android operating systems like phosh ) will still have the freedom to install tools that they want, but the overwhelming majority of people will be stuck with what can quickly devolve into a government-controlled cop-in-your-pocket.

How we can push back
In an increasingly centralized world, it will take very little for an abusive government to cause an effective organizing tool to disappear, to block an app that belongs to a critical dissenting media outlet, or to force invasive malware into a software update used by everyone. We need a shared infrastructure that doesn’t permit this kind of centralized control. We can disrupt oligopolistic control over software through user choice (e.g., preferring and installing free software), building good protocol frameworks (e.g., demanding tools that use open standards for interoperability), and through regulatory intervention (e.g., breaking up monopolistic actors, or mandating that an OS must allow sideloading, as the EU did with the DMA).

The device you carry with you that is privy to much of your life should be under your control, not under the control of an abusive government or corporations that do its bidding.

The Peaceful Transfer of Power in Open Source Projects

Hacker News
shkspr.mobi
2025-11-19 13:20:42
Comments...
Original Article

Most of the people who run Open Source projects are mortal. Recent history shows us that they will all eventually die, or get bored, or win the lottery, or get sick, or be conscripted, or lose their mind.

If you've ever visited a foreign country's national history museum, I guarantee you've read this little snippet:

King Whatshisface was a wise and noble ruler who brought peace and prosperity to all the land.

Upon his death, his heirs waged bloody war over rightful succession which plunged the country into a hundred years of hardship.

The great selling point of democracy is that it allows for the peaceful transition of power. Most modern democracies have rendered civil war almost unthinkable. Sure, you might not like the guy currently in charge, but there are well established mechanisms to limit their power and kick them out if they misbehave. If they die in office, there's an obvious and understood hierarchy for who follows them.

Most Open Source projects start small - just someone in their spare room tinkering for fun. Unexpectedly, they grow into a behemoth which now powers half the world. These mini-empires are fragile . The most popular method of governance is the Benevolent Dictator For Life model. The founder of the project controls everything . But, as I've said before, BDFL only works if the D is genuinely B. Otherwise the FL becomes FML.

The last year has seen several BDFLs act like Mad Kings. They become tyrannical despots, lashing out at their own volunteers. They execute takeovers of community projects. They demand fealty and tithes. Like dragons, they become quick to anger when their brittle egos are tested. Spineless courtiers carry out deluded orders while pilfering the coffers.

Which is why I am delighted that the Mastodon project has shown a better way to behave.

In " The Future is Ours to Build - Together " they describe perfectly how to gracefully and peacefully transfer power. There are no VCs bringing in their MBA-brained lackeys to extract maximum value while leaving a rotting husk. No one is seizing community assets and jealously hoarding them. Opaque financial structures and convoluted agreements are prominent in their absence.

Eugen Rochko, the outgoing CEO, has a remarkably honest blog post about the transition . I wouldn't wish success on my worst enemy. He talks plainly about the reality of dealing with the pressure and how he might have been a limiting factor on Mastodon's growth. That's a far step removed from the ego-centric members of The Cult of The Founder with their passionate belief in the Divine Right of Kings.

Does your tiny OSS script need a succession plan? Probably not. Do you have several thousand NPM installs per day? It might be worth working out who you can share responsibility with if you are unexpectedly raptured. Do you think that your project is going to last for a thousand years? Build an organisation which won't crumble the moment its founder is arrested for their predatory behaviour on tropical islands.

I'm begging project leaders everywhere - please read up on the social contract and the consent of the governed. Or, if reading is too woke, just behave like grown-ups rather than squabbling tweenagers.

It is a sad inevitability that, eventually, we will all be nothing but memories. The bugs that we create live after us, the patches are oft interrèd with our code. Let it be so with all Open Source projects.

Headlines for November 19, 2025

Democracy Now!
www.democracynow.org
2025-11-19 13:00:00
Congress Overwhelmingly Passes Legislation Compelling DOJ to Release Epstein Files, Trump Calls for ABC’s Broadcast License to Be Revoked, Trump Defends MBS over 2018 Murder of Jamal Khashoggi, Hamas Rejects U.S.-Backed U.N. Plan to Place Gaza Under International Stabilization Force, Israel La...
Original Article

Headlines November 19, 2025

Congress Overwhelmingly Passes Legislation Compelling DOJ to Release Epstein Files

Nov 19, 2025

Congress overwhelmingly passed legislation Tuesday to compel the Justice Department to release all files related to the late convicted sex offender Jeffrey Epstein. In the House, the vote was nearly unanimous at 427-1, with Republican Congressmember Clay Higgins of Louisiana being the only lawmaker to vote no. Just hours later, the Senate unanimously passed the House bill. President Trump had opposed the bill for months but dramatically reversed course over the weekend, when he encouraged Republicans to support the measure. He has vowed to sign the bill into law. Epstein and Trump were once close friends. On Monday, Epstein’s brother Mark told NewsNation, “Jeffrey definitely had dirt on Trump.” Several Epstein survivors sitting in the House gallery cheered and embraced one another as the final vote tally was read. We’ll hear from the survivors who spoke ahead of the House vote after the headlines.

Trump Calls for ABC’s Broadcast License to Be Revoked

Nov 19, 2025

While hosting Saudi Crown Prince Mohammed bin Salman at the White House yesterday, President Trump called for ABC’s broadcast license to be revoked, after ABC News White House correspondent Mary Bruce asked him why he had not released the Epstein files.

Mary Bruce : “Mr President, why wait for Congress to release the Epstein files? Why not just do it now?”

President Donald Trump : “You know, it’s not the question that I mind. It’s your attitude. I think you are a terrible reporter. It’s the way you ask these questions. … I think the license should be taken away from ABC , because your news is so fake, and it’s so wrong. And we have a great commissioner, the chairman, who should look at that, because I think when you come in and when you’re 97% negative to Trump and then Trump wins the election in a landslide, that means obviously your news is not credible, and you’re not credible as a reporter.”

Trump Defends MBS over 2018 Murder of Jamal Khashoggi

Nov 19, 2025

President Trump defended Prince Mohammed bin Salman when ABC News White House correspondent Mary Bruce asked about the Saudi crown prince’s involvement in the 2018 murder of Washington Post opinion columnist Jamal Khashoggi.

President Donald Trump : “As far as this gentleman is concerned, he’s done a phenomenal job. You’re mentioning somebody that was extremely controversial. A lot of people didn’t like that gentleman that you’re talking about. Whether you like him or didn’t like him, things happen. But he knew nothing about it. And we can leave it at that. You don’t have to embarrass our guest by asking a question like that.”

Trump’s comments contradict a U.S. intelligence report which found that Prince Mohammed bin Salman ordered Khashoggi’s killing. Back in 2018, Khashoggi was lured into the Saudi Consulate in Istanbul, where a 15-person team led by a close associate of Prince Mohammed drugged, murdered and dismembered Khashoggi with a bone saw. Following the press conference, Robert Weissman, co-president of Public Citizen, issued a statement saying, “Trump’s shameful and disgusting comments about the assassination of Jamal Khashoggi cannot be separated from Trump’s personal business interests with the Saudi regime. Trump and his family are receiving, at a minimum, tens of millions annually from branding deals with Saudi Arabia — payments for doing nothing more than permitting their name to be attached to Saudi projects.”

On Tuesday night, Trump held a black-tie dinner for MBS at the White House. Dozens of CEOs attended, including Elon Musk, Amazon’s Jeff Bezos and Apple CEO Tim Cook.

Hamas Rejects U.S.-Backed U.N. Plan to Place Gaza Under International Stabilization Force

Nov 19, 2025

Hamas and other factions inside Gaza are rejecting the U.S.-backed U.N. plan to place Gaza under the control of a U.S.-led board and an international stabilization force. In a statement, the Palestinian groups likened the plan to a “deep international partnership in the war of extermination waged by the [Israeli] occupation against our people.” This is Hamas spokesperson Hazem Qassem.

Hazem Qassem : “This resolution fully adopts the Israeli position and completely ignores the Palestinian position and the interests of our Palestinian people here in the Gaza Strip. Netanyahu does not want to continue with the ceasefire agreement, but rather wants to impose his vision on the Gaza Strip and the entire region.”

This comes as Israel continues to carry out airstrikes in Gaza. Officials in Gaza say Israel has killed 279 Palestinians since the ceasefire came into effect in October.

Israel Launches Airstrike on Palestinian Refugee Camp in Lebanon, Killing 13 People

Nov 19, 2025

In Lebanon, Israel launched an airstrike on the Palestinian refugee camp Ein el-Hilweh, killing 13 people and wounding several others. Another Israeli attack targeted a vehicle in southern Lebanon, killing one person. Established in 1948, Ein el-Hilweh is the largest of 12 Palestinian refugee camps in Lebanon and is home to at least 64,000 people. Tuesday’s attack is the latest violation of Israel’s ceasefire with Hezbollah, which went into effect a year ago. According to the U.N., Israel has been targeting Hezbollah in near-daily attacks since the ceasefire, resulting in at least 100 civilian deaths.

Trump Threatens Strikes on Drug Cartels Inside Mexico and Colombia

Nov 19, 2025

President Trump has voiced support for U.S. attacks inside Mexico and Colombia. This comes as Trump is refusing to rule out sending in ground troops to Venezuela. The New York Times reports Trump has also signed off on CIA plans for covert measures inside Venezuela, possibly to prepare the battlefield for a wider war. However, Trump has also reportedly reopened back-channel communications with the government of Venezuelan President Nicolás Maduro. The U.S. has recently amassed over 15,000 troops in the region and bombed over 20 boats in the Caribbean and the eastern Pacific. The U.S. has claimed the boats were carrying drugs, but no proof has been offered. On Tuesday, Mexican President Claudia Sheinbaum responded to Trump’s threats.

President Claudia Sheinbaum : “We’ve said this with the State Department, with Marco Rubio, and they have understood, so much so that the understanding we have with them is one of collaboration and coordination. And the first points make very clear the respect for sovereignty, respect for our territoriality, and that there is collaboration and coordination without subordination.”

Federal Agents Arrest More Than 200 Immigrants in Charlotte, North Carolina

Nov 19, 2025

Image Credit: The Charlotte Observer

Hundreds of protesters rallied in Raleigh, North Carolina, on Tuesday to protest new federal immigration raids in the city. This comes as federal agents continue to target Charlotte, where agents have arrested more than 200 immigrants since this weekend. The Department of Homeland Security has also announced plans to send 250 federal border agents to conduct immigration sweeps in Louisiana and Mississippi.

Federal Court Rules Texas Cannot Use New Congressional Map for the 2026 Midterm Elections

Nov 19, 2025

A federal court ruled that Texas cannot use its recently passed congressional map for the 2026 midterm elections and will instead have to rely on a previous map from 2021. Texas Republican state legislators over the summer pushed for a new congressional map that could garner the GOP five additional House seats. In his ruling striking down the new congressional map, Trump-appointed Judge Jeffrey Brown wrote, “The public perception of this case is that it’s about politics. To be sure, politics played a role in drawing the 2025 Map. But it was much more than just politics. Substantial evidence shows that Texas racially gerrymandered the 2025 Map.” Texas Attorney General Ken Paxton vowed to ask the U.S. Supreme Court to reverse the ruling.

Texas GOP Governor Abbott Declares Council on American-Islamic Relations a Foreign Terrorist Organization

Nov 19, 2025

Image Credit: CAIR

Texas Republican Governor Greg Abbott declared Tuesday the Council on American-Islamic Relations ( CAIR ), the country’s largest Muslim civil rights group, a foreign terrorist organization. In a statement, Governor Abbott said CAIR’s goal was “to forcibly impose Sharia law and establish Islam’s mastership of the world.” Abbott also designated the Muslim Brotherhood as a foreign terrorist organization and prohibited both groups from acquiring property in the U.S. Robert McCaw, CAIR’s director of government affairs, responded to Abbott in a letter, saying, “You do not have the authority to unilaterally declare any Americans or American institutions terrorist groups. Nor is there any basis to level this smear against our organization.”

Trump Administration Takes Steps to Dismantle the Department of Education

Nov 19, 2025

The Trump administration has taken more steps to dismantle the Department of Education by shifting several key programs to other federal agencies. In one move, the Office of Elementary and Secondary Education will be transferred to the Labor Department. Becky Pringle, the president of the National Education Association, criticized the moves. She said, “Donald Trump and his administration chose American Education Week, a time when our nation is celebrating students, public schools, and educators, to announce their illegal plan to further abandon students by dismantling the Department of Education.”

Brazil’s Supreme Court Sentences Military Officers Over Plot to Kill Lula

Nov 19, 2025

Brazil’s Supreme Court has sentenced high-ranking military officials and a federal police officer to 24 years in prison after finding them guilty of plotting to assassinate President Luiz Inácio Lula da Silva. It follows a similar ruling back in September against former President Jair Bolsonaro, who was sentenced to 27 years and three months in prison for a coup attempt against President Lula. Earlier this month, Brazil’s Supreme Court unanimously rejected Bolsonaro’s appeal challenging his prison sentence.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.

Geothermal's Time Has Come

Hacker News
www.economist.com
2025-11-19 12:38:44
Comments...

How do the pros get someone to leave a cult?

Hacker News
www.theguardian.com
2025-11-19 12:31:54
Comments...
Original Article

When the phone rings at Patrick Ryan and Joseph Kelly’s home in Philadelphia, chances are the caller is desperate. One couple rang because their son was about to abandon his medical practice to follow a new-age guru in Spain. Another call came from a husband whose wife was emptying their life savings for a self-proclaimed prophet in Australia. Yet another family phoned about their niece, who was in a relationship with a man stealing from her, maybe drugging her, probably sexually assaulting her.

These families had tried everything else. When nothing worked, they heard there were two men in Philadelphia who might still be able to bring their loved one home.

What Ryan and Kelly do is unusual: they help people leave cults. Over the past 40 years, they have handled hundreds of cases – some simple and local, others stretching across borders and decades. They have been hired by families of both modest and considerable means. They say they have even been hired by government agencies, and that some cults they have investigated have left them genuinely afraid for their lives.

Although many people are involved in cultic studies and education, fewer than 10 people in the US do anything like what Ryan and Kelly do. And among those, only Kelly and Ryan practice their strange and unique method: embedding themselves in families’ lives, pulling on threads like marionettists, sometimes for years.

Their method goes something like this. A family reaches out about their daughter, husband, nephew or grandchild. Ryan and Kelly conduct an assessment that can take anywhere from a day to a week (they would not say exactly). They charge $2,500 for the assessment, then $250 an hour after that, interviewing the family until they understand the dynamics well enough to devise a strategy. Then, over months or sometimes years, they work to create the conditions in which a person might begin to question the beliefs their life has been built on.

Normally, Kelly and Ryan work by strengthening the existing relationships in a person’s life. It can be a long game. They will educate the family about the cultic group, and give advice about what to say (or not to say). They will bring in experts: psychiatrists, lawyers, priests that can provide perspective and counsel. The goal is to untangle the family dynamics that might have made someone vulnerable to a cult in the first place.

Very occasionally, they meet face to face with the person involved in a cult. But these encounters look nothing like a drug intervention, with friends gathered in a circle and the reason for the meeting laid bare. Instead, Ryan and Kelly will act covertly. In one case, a son (the cult member) came home for a few days. His parents told him that Ryan and Kelly were friends of theirs, “family mediators” who happened to be “in town for a few days, to meet with some colleagues” – both technically true. The pair made sure to “forget” a book at the family home, and return the next day to collect it, as they began to build rapport.


I met Kelly and Ryan at their place in south Philadelphia, a three-story house they share with a big dog named Kenny and a bright green parrot named Greta.

Greta was a consolation prize Ryan bought for himself after a failed intervention, the second he ever attempted. It was the 1980s and his client, a woman who had recently finished her master’s at a prestigious university, had been drawn into a scam job. It was essentially a pyramid scheme built around a health regimen. Before you could sell it, you had to try it, so you knew what you were selling.

The regimen? Multiple enemas a day. “It escalated to 40 to 60 enemas a day,” Ryan said. “And when you do that many enemas, it upsets the electrolyte balance in your body and you begin hallucinating.”

He spent three days trying to reason with her, but she would not budge. Ryan asked himself: what value do I have if I can’t even talk someone out of an enema cult? Frustrated, he went for a walk, saw a bird in a pet shop window who said: “Hello, hello.” He put her in his coat, fashioned a small cage, took her on an airplane and brought her home.

Joseph Kelly with their pet parrot, Greta. She was a consolation prize Ryan bought for himself after a failed intervention. Photograph: Nic Neves

Their approach has changed a lot since those early interventions.

First, they are careful with language. They don’t love the word cult. They say it’s a cudgel: too blunt an instrument to get at the heart of the problem. Also, even if a client leaves a group and returns home, Ryan and Kelly wouldn’t say they “got them out”. They describe themselves as mediators who build bridges through which families can reach their loved ones. Sometimes, the person crosses that bridge. Sometimes, the outcome is more complicated.

Second, they have worked hard to distance themselves from “deprogramming” – the practice most people associate with cult interventions. In the 1970s and 80s, deprogramming could involve kidnappings, involuntary confinement and even violence. In one case Kelly mentioned, a cult member was held at gunpoint. It was controversial, and its effectiveness was questionable. “That,” Ryan said more than once, “is not what we do.”

Nowadays, they focus more on helping someone reach their own informed conclusion about the group they are part of, trying to soften the obstacles that might cloud their judgment.

For instance: one of the tricky parts, they explained, is communicating with a person who has been given tools to block out other people’s perspectives. This set of tools or ideas is what Ryan and Kelly call a group’s “gatekeeper”.

Ryan gave me an example. One client came from an extremely rigid, orthodox Catholic family. The family had a plan for life: retire early, save well, put the kids through college. But against these goals, the wife had joined an eastern religious group and was donating thousands of dollars to it. She had quit her job, and the marriage was collapsing.

The gatekeeper, Ryan and Kelly decided, was that the woman perceived her spouse “as dogmatic, fundamentalist – but not spiritual”. They needed to change her mind about her husband.

So Ryan called an old friend of Kelly’s, a Jesuit priest who lived in a parish near the family’s home. Ryan asked the priest to meet the husband. The two men became friends and agreed to meet regularly – all according to Ryan and Kelly’s plan. Every so often, the husband would text his wife: “I’m coming home late tonight, meeting my priest friend.”

“She’s like, ‘What priest friend?’” Ryan said.

After a few months, the wife became curious enough to want to meet her husband’s new friend. The priest, who was genuinely thrilled, nearly veered off plan by offering to speak with her directly. He believed she was ready to hear his views on spirituality. But Ryan stopped him: “I told him, look, they hired us to be strategists. I have a strategy for this.”

Ryan mapped out the parish and planned a tour. He made sure the route passed through the library specifically, the section with many eastern religious books. “You’re gonna go through there,” Ryan told the priest.

On a Friday, the husband brought his wife along to visit. The priest greeted them warmly and showed her the grounds. They walked through the library. She saw the books.

Soon, the priest was coming over for barbecues. They all became friends. And she began openly talking with her husband about the group she was involved in: the good and the bad. They had passed the group’s gatekeeper. But the work was not finished.

All groups have a rhythm, like a pulse across the calendar year. We have holidays, and we have tax season. There are highs and lows. If you want to talk to someone about how dangerous their group is, you probably do not want to do it right after they have taken ayahuasca or gone on retreat. But the lows come just as reliably.

When the wife finally started to complain about the group, the husband called Ryan: “She’s going to leave!” But Ryan told him firmly: “No, she’s not. Don’t push it.”

By the third cycle, the third low point, when she was sleep deprived, working long hours and truly miserable, Ryan gave the husband a single line. “Just say to her this: ‘You gave it a good shot.’ And nothing more.”

“She said: ‘Yeah, I have. Will you help me get my stuff?’ And he said: ‘OK.’”

The whole time, the wife knew her husband had consulted Ryan and Kelly, though she did not know they had orchestrated his friendship with the priest. During the five years they worked on the case, she assumed they were anti-religious bad actors. A few months after she left the group, she met Ryan and Kelly for the first time.

In Ryan’s telling, she loved chatting with Kelly and himself because they so clearly understood what she appreciated about the group. But they also saw that she was being made to sleep only a few hours a night, drink toilet water, and work hundreds of hours recruiting members for a guru accused of sexual misconduct and labor law violations.


Ryan and Kelly started doing this work because when they were younger, they themselves had been in what would be described as cults. They were Transcendental Meditation (TM) instructors in the 70s and 80s. After about a decade with TM, they felt disturbed by their relationship to the organization, and they sued – Kelly in 1986, and Ryan in 1989 – for negligence and fraud. Kelly joined a suit as a Doe along with six others, claiming the organization had “fraudulently promised that the practice … would confer certain personal and societal benefits”, which never materialized. Ryan says that during the course of his TM training he was constantly surveilled and led to believe that he would be able to levitate and save humanity.

The case Kelly joined, which dragged on for several years, included expert testimony from clinical psychologist Margaret Singer , a brainwashing specialist who had previously assessed Charles Manson. Neither case won, but their lawsuits eventually settled, and through the course of the litigation, Ryan and Kelly left the organization. (TM did not respond to a request for comment; however, Bob Roth, CEO of the TM-associated David Lynch Foundation, did let me know the American Heart Association recently named Transcendental Meditation an official stress reducer for treating high blood pressure.)

Kelly joined another group after leaving TM. He followed his new guru for five more years. Meanwhile, Ryan told me he got busy investigating and trying to expose cults, including the group Kelly had joined. In those early days, Ryan considered himself a sort of “cult fighter”, with a much more black and white view of what cults were and what it meant to be a part of one. They finally started working together when Kelly had a falling out with his second group, whose guru was eventually convicted for child sexual abuse.

Ryan holds William S Kroger’s 1976 book called Hypnosis and Behavior Modification: Imagery Conditioning. Photograph: Nic Neves

They have had a close relationship ever since, working and living together with their dog and bird in a big house they told me was once used as a base of operations by the Philly mafia, which seems oddly fitting. They mostly prefer to keep details about their personal lives off-record. Often, the families they work with need to hear very hard things, and being a sort of blank slate makes it easier for them to be whoever their clients need them to be.

Throughout reporting this piece, privacy was an issue. Ryan and Kelly told me many more details about their cases off the record. All these cases are anonymized, with some crucial details changed, to protect the identities of their clients and their families. Furthermore, Kelly and Ryan urge their clients not to speak with the media. The firmest “no” I ever got was when I asked Ryan if I could speak to a former client. The second was when I asked if they could show me emails or letters to prove they had worked with government agencies. This made it difficult to verify all the details of their stories, though I found the situations they described were consistent with other accounts of ex-members from cults they say their clients were a part of. When cases did make it to court, the details Ryan and Kelly provided me matched the legal testimony I found.

But without being able to speak to their former clients, some of the stories told here remain just that: stories in the telling of Ryan and Kelly. I was, however, able to speak with many of their collaborators, who confirmed that they had seen Ryan and Kelly’s method work close up. One of the people I spoke with, Dr Janja Lalich, is a professor emerita of sociology at California State University, Chico and author of multiple books on cults including Bounded Choice: True Believers and Charismatic Cults. Lalich lectures and consults on cultic studies, and regularly testifies as a cult expert in court cases internationally. She started studying cults because she, too, joined and left one when she was younger. It was a radical Marxist-Leninist cult that eventually “imploded”, a process she details in her book. The members collectively overthrew the leadership and all left at the same time, she explained, “which was great”.

Lalich worked on a couple of cases with Kelly and Ryan in the 90s, when they were starting out. She did not like the work. She found it stressful and difficult, and felt some reservations about the way the process interfered with people’s lives. But the three of them have remained close over the years and still collaborate in the broader cult-awareness space, attending conferences and teaching workshops. She confirmed for me a lot of the claims Kelly and Ryan made about the cults they have dealt with, including the idea that most people who join cultic groups leave on their own.

Ryan concedes that their work can look a lot like meddling in someone’s life. But he is also firm in that they are not “hired hitmen”. They work with psychologists, psychiatrists and social workers to provide oversight, several of whom I spoke with for this piece. “You can’t just interfere with someone’s life because you don’t like what they’re doing,” Ryan told me. When Kelly and Ryan take on a case, it’s because there is some dynamic in the family system that they think their expertise can help untangle. In every case, the group in question is offering something to the person involved that the family might not be able to understand or appreciate. But to Ryan and Kelly, this appreciation is exactly the point.


One of their cases in the 90s involved a cult leader who was systematically sexually assaulting the group’s members. “I can’t get into all the details,” Ryan said. “He was horrible, a horrible man.” Ryan and Kelly had been flying regularly to Australia to work on the case. The client’s niece, a girl in the group, was beginning to fall out with the cult. The leader had been arrested and was on trial for crimes related to the cult’s activities.

In their process, Ryan and Kelly require what they call 50 things: “You have to find 50 things that you could agree with the person on.” Ryan gestured to a painting on the wall in their living room. It was a strange, surrealist-looking canvas with a big Tesla coil in the center and lightning shooting out at some pigeons. Ryan said, “If you look at this piece of art and say, ‘That’s really ugly,’ then we’re going to start off … not on the right page, right?”

But if I could appreciate what he found appealing, then, he said: “I think you have the right to criticize it.” The number may seem arbitrary, but their goal is to find 50 things a family can appreciate about a cult before discussing what they do not agree with.

I put this number to Lalich and she said the notion of having to find 50 things seemed a bit extreme. “I certainly could never find 50 things about my cult that I thought were good.” The spirit of it seemed right to her though, at least: that the family needs to tone down their rhetoric, or they will just push the cult-involved member away.

Ryan sits in front of a painting of a Tesla coil and pigeons in his living room. Photograph: Nic Neves

In Kelly and Ryan’s case, the girl’s uncle, their client, had a very difficult time finding anything positive about the group or the leader who had allegedly raped his niece. When the trial came, the uncle wanted to testify against the leader, and Ryan and Kelly told him not to. “We said, if you testify, your niece … will cut you off.”

The uncle went to court anyway. Just as Ryan had predicted, the niece fell off the map entirely. She was scared they would kidnap her – try to deprogram or threaten her. Ryan and Kelly pulled some strings to find out that she had done some traveling, but otherwise, for “20 years”, Ryan said, “they didn’t know if she was alive or dead.”

On Ryan and Kelly’s counsel, the family made a social media account in the 2010s to post information about the family: weddings, births, etc. After nearly 30 years, the girl, now in middle age, finally reached out. The family had posted about how the grandfather was getting old, and she called to say she wanted to see him before he died.


Much has been written about the psychology of cults, the archetypes of cult leaders and the way they can create tragic , abusive conditions for their members. In just the past few years there have been Christian sects convicted of manslaughter of children , doomsday groups killing police officers , and starvation cults with bodies piled in mass graves . While Lalich says that to her, it is pretty clear what is or is not a cult, she also concedes that groups exist on a broad continuum ranging from extremely dangerous to “more or less” benign. She does not think that there is such a thing as a “harmless” cult – since all these groups exert some measure of coercion and manipulation. But for Ryan and Kelly, defining precisely what is or is not a cult is actually counterproductive, since so much of what they do is appeal to the person inside the cult who they are trying to reason with.

So, rather than labeling a group as a cult, Ryan and Kelly focus on “cultic relationships” that exist between a member and an organization. “Ten million people have learned Transcendental Meditation,” Ryan clarified. “Ten million people are not in a cult.” His voice rose and he shrugged. “I mean, they’ve been lied to. As a teacher, we lied to them. We told them things that were just absolutely not true.”

“Bonkers,” Kelly added from his rocking chair.

“Bonkers,” Ryan confirmed.

Over the course of their careers Ryan and Kelly have found that in order to mediate people’s relationships with these groups, they have to gain a better understanding of how they are drawn in to begin with. How is it that a cult leader can make a person seriously believe that they can levitate, or that drinking toilet water is acceptable? They have to understand how exactly a group manages to shake people’s fundamental assumptions about reality.

For example, Kelly described a case in which a leader would command people to have sex with one another: “‘You, woman, sleep with that woman.’ ‘You, sleep with that man.’” Even if participants were straight, the leader would ask them: “What is your limitation?” This is an archetype of cult leader that Kelly calls a “crazy adept”: “the disruptor, who comes in and destroys the norms in order to build up a better, purer reality.”

One of their close collaborators, Ashlen Hilliard, told me about a harrowing case whose details she preferred to keep tightly under wraps. She said they were referred to the case by a US government agency investigating the group, and it had proved extremely dangerous. If they were publicly known to be helping members leave, the group could retaliate. “I care about this,” Ryan said of this interview, “but I care more about not dying.”

Ryan: ‘The only way [some questions] can be answered, in my mind, is by a feeling. And, that feeling is so easily manipulated.’ Photograph: Nic Neves

Hilliard explained that in this group, words like “victim” were twisted out of shape. “Instead of assigning a negative meaning to a word like ‘victim’, they say: this is a word that indicates a badge of honor.” Then, when a member was subject to sexual violence or other abuse by the group, being a “victim” was reframed as something positive. Often, people in these groups have experienced past trauma, and this destabilization of the concept of victimhood can feel freeing – at least initially.

When Kelly and Ryan say these groups are “offering something” to people, this is exactly what they mean. There is a hole a group fills: alienation from community, family, sexuality; pressure to follow a certain life plan, addiction, unrealized spirituality, economic catastrophe – all reasons to join a group. We all have deep pains that make us hope that maybe, if the world were different, we wouldn’t feel the way we do.

Part of why their work is so necessarily confidential is that there is always the possibility a person will go back to their group. These are people trying to make sense of a reality whose fundamental rules have been turned on their head. When is anyone ever “done” making sense of things, anyway?

Kelly still thinks about a moment with the guru he followed after leaving Transcendental Meditation, back in 1985. He had been meditating at the feet of the guru, Prakashanand Saraswati (who they called Swami-ji, or “guru”), for several days. When he looked up, he saw the Swami surrounded by “a golden light.” He was not seeing an illusion. It was a real experience, built on ideas and promises laid out by the guru: a supreme, divine, transcendent love. “The wave merging into the ocean,” Kelly said.

After that experience, Kelly felt Swami-ji could do no wrong. For the next three years, even when he saw the women visiting Swami-ji’s bedroom, the demands for thousands of dollars, the outbursts of rage; it all felt insignificant, or easily dismissed.

For that reason, Kelly and Ryan are not looking to convince people of any particular version of reality or truth. They do not seem to be interested in truth at all, really. When you use your experience to test whether or not something is true (the holiness of a guru, the righteousness of a cause) then, Ryan told me: “The person who gives you that experience will own you.” Their work is to usher people into a state of skepticism about the conclusion they have drawn from their experiences; beginning to open them up to the idea that individual experience is not the same as truth or reality.

This lighter touch approach is controversial. While interviewing people in the broader cult-awareness network, I found that Ryan and Kelly had drawn some criticism for affiliating with a certain group of academics that some people in their sphere disparage as “cult apologists”. This group belongs to a branch of cultic study that, like Ryan and Kelly, avoids the term “cult”, preferring “New Religious Movement”. Kelly and Ryan have consulted these academics over the years and have kept some as trusted contacts. Lalich and others say these apologists undermine survivors’ efforts to hold cults accountable for their abuses, by brushing over harms such as child neglect and sexual abuse committed by groups like the Children of God (The Family International) or the Unification Church, even testifying in court on a cult’s behalf. It’s a bitter, complicated split in the field of cultic study, but these academics say, among other things, that they are speaking out for freedom of religion. When Ryan and Kelly mentioned these apologists, they said they understood Lalich’s criticism, but that there was a way in which they could see things “through their lens”.

Ryan and Kelly are not cult apologists, but in order to do their work they have had to keep an open mind. They neither fully endorse cults’ right to exist, nor consider such groups bad per se. They approach their work from as ideologically empty a place as they can, a skeptical place that is neither here nor there. Doing work like this, the big questions of epistemology, of what we can know and what to believe, become everyday practical quandaries.

“I just know what is not real,” Ryan told me once. Take even the broadest existential question: what are we doing here?

“The only way that can be answered, in my mind, is by a feeling,” he said. “And, that feeling is so easily manipulated.”

You have to be a certain kind of person to do this work. Though Lalich does not do interventions any more, she is glad there are people who do it in the “legitimate way”. When I asked her who she thought did it in the “legitimate way”, she only named four people. Of them, only three, including Ryan and Kelly, were still actively taking cases.

Hachi: An Image search engine

Lobsters
eagledot.xyz
2025-11-19 12:21:33
Comments...
Original Article

Hachi: An (Image) Search engine

Only the dead have seen the end of war .. George Santayana

For quite some time now, I have been working on and off on a fully self-hosted search engine, in the hope of making it easier to search across personal data in an end-to-end manner. Even as individuals, we are hoarding and generating more and more data with no end in sight. Such "personal" data is stored everywhere from local hard disks to corporate-controlled cloud data centers, which makes it distributed in nature. For the following discussion, "personal" is flexible enough to accommodate resources on remote servers and/or on different devices, as long as the user can prove authentication and/or authorization for that data. The current implementation supports only images, but the eventual goal is to support other modalities like video, text and audio; some code would be shared, while some new code would be required to better extract features for each modality.

The distributed nature of this data, together with the potential of current self-hosted machine learning models to extract semantic information that can then be queried through a single interface, seemed enticing enough for me to start this experiment in the first place. The following post may at times seem incoherent as I try to articulate my thoughts on the development journey, the challenges faced and future ideas. I hope to treat this as a personal essay with multiple themes, anecdotes and even random thoughts, aiming to provide a higher-level view of the journey and philosophy so far in more concrete terms.
Also, this post doesn't aim to cover every technical choice and implementation in finer detail; such discussions will instead be part of dedicated future posts!

Motivation:

As humans, we tend to remember different attributes/parts of an entity at different times, and most search-engine interfaces refuse to accommodate that. Users generally end up with a unidirectional flow of information, with no recourse for providing feedback to improve the ongoing query. Even the most advanced interfaces fail to handle the stochastic nature of queries and humans' predisposition to keep moving with partial information; it should be the default for search engines to present best-effort suggestions even for queries that couldn't be fully resolved.

I also note that it is not always easy to model imperfect information, like handling a misspelling, which itself could be misspelled in many ways. Building a better search interface requires conscious effort, as most digital machines make it easy to model when "something" is "correct" or "incorrect". Conveying why something is incorrect takes a lot more code, effort and time, which suggests that economic realities are more to blame for such cases than bad intentions!

It also presents an opportunity to analyze the capabilities of a good interface, as personal data makes it very easy to notice limitations that couldn't be observed through the seemingly complete interfaces exposed by many e-commerce companies.

Inspired by the ideas above, my approach has been to expose multiple (if not all) attributes of a resource directly to the user and then let the user recursively refine the query to get to the desired result. The implementation is still far from complete, but this theme has served me well in setting a basic roadmap for the project. Other themes, such as self-hosting, hostile behaviour towards users in the form of privacy-invading features, and the limited or non-existent options to refine a search on Google, GitHub, etc., have contributed to the evolution of this experiment. Distributed queries served by a cluster of (refurbished) smartphones or single-board computers remain a lofty goal of this experiment too!

Despite all the good intentions and ideas, any search interface must pass the threshold of being fast enough not to end up as another impractical experiment. From the beginning, effort has gone into embracing the inevitable complexity such projects come to include despite many refactorings. Below is a minimal video to help visualize the current state and capabilities of the project.

Broader Ideas:

  • Minimalism: Minimalism in the number of external dependencies required to bootstrap this project explains a lot about the downstream choices and the evolution of the project to its current form. In practice this has meant porting existing (source) code where possible, or writing it from scratch, which itself requires reading a lot of existing code before I can port it and extend the project in a pure source sense. If it were practical to reuse code from existing capable projects/databases, I would have done so, but most such projects are deliberately decoupled from application code for good reasons: they are supposed to offer much stronger guarantees and stay robust even under heavy load. As an (embedded) part of a personal application, we can choose to do away with such guarantees and yet expose much more information by tightly integrating the ML model pipeline. In the end, the application has to handle much more complex indexing and inference pipelines, which requires a lot more code than the search and storage interfaces such projects generally expose!

  • Experimentation: Thinking in terms of augmenting existing information rather than duplicating it, while fusing traditional (deterministic) attributes with semantic (ML) attributes. I think this is an interesting problem that has not been fully utilized/explored for personal applications. Most traditional databases were written to handle only the text modality, but current ML models allow us to query semantic information too, which opens up a new space to experiment in. I treat semantic information as a necessary and independent signal, but not the only one useful for implementing great search interfaces (a minimal sketch of this fusion follows this list).

  • Hackability: I wanted it to be very easy for someone to start modifying this project according to their needs, which mostly correlates with the first point about minimalism: the fewer the dependencies, the less configuration is required to bootstrap the development environment. Both Python and Nim are stable, cross-platform languages and are easy to extend using just a C compiler. Nim source code is easy to compile and/or cross-compile on almost all platforms. There are already Python bridges for many languages, so all such languages are fair game for extending the codebase in any desired way!
    Python environments (in)famously have a reputation for being difficult to bootstrap; a whole parallel ecosystem exists to do so, which itself creates another dependency. But I think the project has made great progress in this regard, now requiring just three dependencies (numpy, regex and markupsafe, plus optionally requests) with no hard dependence on versioning. Almost any Python environment can run the project with no changes, which also removes the need to bootstrap a dev environment with a huge dependency like Docker or with the complex, unwarranted build systems plaguing many interesting projects. If I had money, I would pay someone just to make such projects easier to install and start with, by removing redundant configuration or making it possible to use one common build system!
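
As a concrete (and entirely hypothetical) illustration of the fusion mentioned under "Experimentation", the sketch below first narrows candidates with a deterministic metadata filter and then orders the survivors by a semantic score; in the real system that score would come from comparing embeddings, and every name here is made up for illustration:

    # Hypothetical illustration (not Hachi's code): combine a deterministic
    # metadata filter with a semantic score to rank the surviving candidates.
    metadata = {
        "img_001.jpg": {"person": ["alice"], "year": 2021},
        "img_002.jpg": {"person": ["bob"], "year": 2023},
        "img_003.jpg": {"person": ["alice"], "year": 2023},
    }
    # Placeholder scores for a query like "birthday cake"; in practice these
    # would come from an ML model.
    semantic_score = {"img_001.jpg": 0.12, "img_002.jpg": 0.80, "img_003.jpg": 0.65}

    def search(person=None, year=None):
        # Deterministic part: exact attribute matching.
        candidates = [k for k, m in metadata.items()
                      if (person is None or person in m["person"])
                      and (year is None or m["year"] == year)]
        # Semantic part: order whatever survived by the model's score.
        return sorted(candidates, key=lambda k: semantic_score.get(k, 0.0), reverse=True)

    print(search(person="alice"))  # -> ['img_003.jpg', 'img_001.jpg']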

Even though the above ideas may seem worth following, there is always an ongoing fight to prevent dilution of agreed-upon principles. Counter-intuitively, I think there is some kind of activation-energy ( https://en.wikipedia.org/wiki/Activation_energy ) requirement for each project; past that point it actually becomes much easier to extend, modify and optimize the codebase, somewhat like paying off a debt to live debt-free :)

There are already very capable projects like SQLite and Lucene offering full-text search capabilities, but they implement their own storage backends, which requires all data to be transformed into a compatible format and leads to duplication of data. This is something I wanted to avoid: we would be continuously transforming every new piece of data, which becomes computationally expensive when that data doesn't even reside on the same physical machine/node. If we can get away with fast-enough queries through a much simpler index/database, that seems worth pursuing further.
Most such projects were created to handle only text queries, but current ML models expose semantic information through "vectors" or "embeddings", generated after a series of linear and non-linear operations on some text and/or an image. Top-k matching results are later retrieved through a comparison procedure that takes the user's query embedding as one of its inputs. Such extensions are gradually being added to many older engines, but a hackable codebase like this project may offer more flexibility while accommodating future ideas in this rapidly evolving field!
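
To make that comparison step concrete, here is a small numpy sketch (numpy already being one of the project's few Python dependencies) of top-k retrieval over stored embeddings by cosine similarity; the shapes and names are my own illustration, not Hachi's internals:

    import numpy as np

    def top_k(query_emb, stored_embs, k=5):
        """Return indices of the k stored embeddings most similar to the query."""
        # Normalize so that a dot product equals cosine similarity.
        q = query_emb / np.linalg.norm(query_emb)
        s = stored_embs / np.linalg.norm(stored_embs, axis=1, keepdims=True)
        sims = s @ q                     # one similarity score per stored vector
        return np.argsort(-sims)[:k]     # indices of the best matches, best first

    # Usage: 1,000 stored 512-d vectors and one query vector.
    stored = np.random.rand(1000, 512).astype(np.float32)
    query = np.random.rand(512).astype(np.float32)
    print(top_k(query, stored, k=3))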

This leads to a design comprising a metadata indexing engine coupled with vector-search engines for semantic search. We never intend to duplicate the original data, and we don't care where it actually resides once indexing is done, since search is really about reaching a desired file/resource before that resource can be used. Pinpointing that resource's location quickly, by incorporating the user's intentions and context recursively, is the major motivation!

(C)Python is used as the main backend language, and Nim (and C) is used to speed up bottleneck portions of the codebase wherever warranted. Writing from scratch allows me to update the API as I see fit to handle a bottleneck in the pipeline (querying or indexing), without asking or waiting for a change in some upstream dependency. Nim itself has a relatively small community, so I have grown comfortable porting code from other languages into my projects using only the standard library, and even experimenting with my own data structures based on (protected) reference semantics rather than the value semantics Nim uses by default!

Meta-Index:

This is a minimal module/database, written in Nim, to handle (store and query) the metadata extracted from resources (images). Currently it is a single-threaded, column-oriented database that uses JSON as the data-exchange mechanism between Python and Nim. The idea is to eventually shift to multiple threads for workloads above a size threshold, to make better use of current hardware. It is also possible to generate an auxiliary index on demand to speed up queries for a given column/attribute, which internally uses cache-friendly, hierarchical data structures to achieve this in most scenarios!
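
As a rough mental model only (written here in Python rather than Nim, and far simpler than the real meta-index), a column-oriented store keeps one array per attribute and can build an auxiliary index for a column on demand:

    # Simplified, hypothetical model of a column-oriented metadata store
    # (the real meta-index is written in Nim and exchanges JSON with Python).
    class ColumnStore:
        def __init__(self, columns):
            self.data = {name: [] for name in columns}  # one list ("column") per attribute
            self.aux = {}                                # optional per-column map: value -> row ids

        def append(self, row):
            for name in self.data:
                self.data[name].append(row.get(name))

        def build_index(self, column):
            # Auxiliary index, built on demand, avoids a full scan for equality queries.
            idx = {}
            for row_id, value in enumerate(self.data[column]):
                idx.setdefault(value, []).append(row_id)
            self.aux[column] = idx

        def query_eq(self, column, value):
            if column in self.aux:                       # indexed path
                return self.aux[column].get(value, [])
            return [i for i, v in enumerate(self.data[column]) if v == value]  # scan path

    store = ColumnStore(["filename", "person", "year"])
    store.append({"filename": "a.jpg", "person": "alice", "year": 2021})
    store.append({"filename": "b.jpg", "person": "bob", "year": 2023})
    store.build_index("person")
    print(store.query_eq("person", "alice"))  # -> [0]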

Developing this module has made it easier to see why most databases end up with some kind of dedicated query language: situations arise that require composing multiple operations in one go, and a query language is a cleaner way to model such intentions. (This also seems to validate the need for a query planner that can execute a query better by analyzing the order and nature of its operations and some internal details.) Since it is written for Hachi itself, I can still speed up a frequent operation by sharing a pointer/memory directly across Nim and Python to prevent costly copy operations, or in some cases serve raw JSON directly to the frontend without serializing and deserializing at the Python boundary.

I have also experimented with multi-versioning storage design as Lmdb , to protect the original information created by code itself from user revisions. But current implementation instead favours creation of a dedicated field/attribute for user to modify/update. For example during face clustering process, backend will assign an unique Id for each new cluster , to which user may want to change to a more descriptive name, this leads to presence of attributes like personML and person in the final schema. By default, any attribute/information generated through during indexing pipeline is supposed to be immutable to be easily reset to genesis state.
It is still a somewhat rigid implementation, as the schema is locked once initialized (lazily or explicitly): adding new columns dynamically would require me to reallocate data in memory and add more syncing logic, which I am putting off for now and will work on in the future! The current iteration supports string, int32, float32, bool, and array[string] data-types, which seems to be enough for the application's needs but could evolve in the future. I am not particularly content with the current "string" querying; one reason is that Nim by default does not have a concept of a no-copy slice, and it is difficult to even expose such a user-defined type. Since strings are null-terminated, most other composed data-structures with a string as one of their fields carry that underlying assumption, which such a user-defined type would break. Also, I think for a lot of meta-data attributes I could use a ShortString kind of data-type to speed up scanning/querying by better leveraging the cache. Some of these issues are being experimented with through an independent project, and if found to improve performance they could be implemented in this codebase too!
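
As a concrete (hypothetical) example of such a schema, where columns other than personML and person are purely illustrative, ML-generated attributes are marked immutable while their user-facing counterparts stay editable:

# Hypothetical schema sketch: "immutable" marks columns written only by the
# indexing pipeline, so they can always be reset to their genesis state.
schema = {
    "absolute_path": {"type": "string",        "immutable": True},
    "size":          {"type": "int32",         "immutable": True},
    "is_favourite":  {"type": "bool",          "immutable": False},  # user-editable flag
    "personML":      {"type": "array[string]", "immutable": True},   # cluster ids from the pipeline
    "person":        {"type": "array[string]", "immutable": False},  # user-assigned names
}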

There are also SIMD opportunities inside the "querying" code, but since its design is being guided by the overall needs of the product itself, I hope to add those architecture-specific optimizations only after the system design becomes stable enough for most of the features that are supposed to be supported!

Face-Recognition:

Being able to group the same person(s) with a high probability, as another attribute to search for or mix with other attributes, would be a quality addition to any search interface. Current DL models have for some time been able to distinguish faces with high accuracy, but distinguishing real-life faces still requires conformance to the pipeline such models were trained with.
Multiple architectures have been proposed to tackle this problem, but most pipelines can be assumed to follow a generic flow: it begins with detection of facial bounding boxes in a given image or camera frame, followed by detection of facial landmarks for each such face, and ends with the generation of embeddings/vectors which figuratively represent some kind of latent representation of that face. At this point the problem is reduced to a vector-space one, and hence is much easier to deal with using traditional tools like nearest-neighbour search!
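
Once faces are detected, aligned and embedded, the "same person?" decision is just a similarity check in that vector space. A minimal sketch (assuming L2-normalized embeddings produced by the pipeline; the 0.5 threshold is an arbitrary placeholder):

import numpy as np

def l2_normalize(v):
    return v / np.linalg.norm(v)

def same_person(emb_a, emb_b, threshold = 0.5):
    # Cosine similarity between two face embeddings; for normalized vectors
    # this is just the inner product, so nearest-neighbour search scales well.
    return float(np.dot(l2_normalize(emb_a), l2_normalize(emb_b))) >= threshold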

It is almost always overwhelming to decide on a particular implementation to build upon while accommodating various factors like latency, accuracy, and hardware requirements, and most of such intensive pro-bono work will never even be visible to the end-user. For me at least this goes much further, as I am implementing each such model using an independent ML framework, which requires me to also understand all the pre-processing and post-processing code so it can be faithfully ported to Nim.
Spending time reading papers and existing implementations helps me get an idea of the overall "capability" of a model and the potential requirements for fine-tuning it in the future. Sometimes just coming across an interesting concept or a nice optimization trick in a paper has been enough, even if I end up not using that particular implementation.
Most face-embedding models are trained on a Siamese-like loss objective to explicitly distinguish both positive-positive and positive-negative pairs. This generally involves manually collecting such pairs and is hence prone to bias! Such feature predictors are also very sensitive to the face-alignment code used, and hence may require you to faithfully follow the training code! The dataset used for training and the choice of objective function are two major factors influencing the performance of any model. Leakage of evaluation data into the training set has been a real issue in recent years for many experiments. Face recognition itself is a very subjective problem and generally requires more "visual" testing in addition to the (mathematical) metrics proposed for this problem/sub-domain.

The current pipeline uses a RetinaFace model to predict faces and landmarks in one go, which helps produce stable facial landmarks and speeds up the pipeline. (Predicting facial landmarks from internal features is much cheaper than through a dedicated model, and it also helps stabilize the training of the model.) One could argue about a model's ability to internalize correlated features without an explicit loss, but in practice it is always (very) beneficial to use multiple losses explicitly. Interestingly, the residual connection in ResNets was an important innovation that made it possible to train much deeper networks at the time, even though it just mimics an identity function (see https://en.wikipedia.org/wiki/Residual_neural_network ). Explicit multiple losses decrease the chances of over-fitting by a large margin. There can also be auxiliary objectives that are used during training only, by means of a smaller auxiliary network, and are not used/required during inference, just like training wheels :)

In my experience, the dataset used for training and the choice of objective function are the two major factors influencing the performance of your model on real-life (somewhat out-of-distribution) datasets. I find it good practice to always visually debug some random samples to get a "feel" for the dataset!

Even with a good pipeline to generate "embeddings" for a face, clustering remains a very challenging problem, for various reasons. As with almost all clustering algorithms, we start out with no prior information about the underlying distribution (or number) of the data (faces), as this is exactly what we are trying to estimate. As we keep encountering new information, the underlying index may need updates through back-filling, which somewhat resembles an auto-regressive operation and hence has a relatively high error-accumulation rate. We also need to wait for some "initial" amount of data/information to be collected to estimate stable initial centroids. This difficulty is further compounded by the choices of various thresholds, like the face-detection threshold and some measure of blurriness in the detected face, and by a dependence on the order in which information is encountered.

As indicated, choosing the same model to predict landmarks and face bounding boxes helps reduce the impedance mismatch that occurs when the output of one model is fed into another. We still need a dedicated model for facial features though, as the earlier features may not be dense enough to distinguish among individual faces!

The current implementation works by collecting some minimal amount of information before the cluster-creation process can begin. Each cluster is a union of a set of main/master embeddings and a set of follower embeddings. Selection of the main embeddings is crucial to maintaining the stability of a cluster even when new information is encountered. Initial filtering of unfeasible (master) embeddings is done through some static criteria: for example, we strive to filter out blurred faces, and the face profile is estimated through facial landmarks, as stable forward-facing profiles make face alignment easier further down the pipeline. Such (static) criteria definitely help reduce the number of invalid candidates, but may not be enough for many real-life datasets. A further minimal module comparing HOG features with a set of pre-saved HOG features is introduced to help invalidate faces with sunglasses and some false positives not caught by the earlier criteria!

(HOG features are ultimately compared at the pixel level, after applying normalization.) After experimenting with other approaches like SIFT features, I found it easier to compare HOG features generated from aligned faces/eyes. The alignment part of the pipeline is crucial for generating rich embeddings; even minor deviations from the reference landmarks end up producing bad embeddings, rendering the pipeline useless. All feasible candidates/embeddings are then compared sequentially to create the final clusters, conditioned on some threshold. Note that for now this is not exhaustive, and hence the order in which information is encountered has some effect on the final clusters! Remaining follower ids are then assigned (sequentially) to one of the clusters, or to a special cluster like no-categorical-info when they cannot be fit into any of the clusters. Note that a lot of empirical judgement comes into play, as multiple decisions are required while choosing the many thresholds, and it may require multiple runs.
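
A rough sketch of that sequential, order-dependent assignment (assuming L2-normalized embeddings and a single similarity threshold; the blur/profile/HOG filtering described above, and the split between master and follower embeddings, are omitted for brevity):

import numpy as np

def assign_clusters(embeddings, threshold = 0.6):
    clusters = []      # each cluster keeps the indices of its representative embeddings
    labels = []
    for i, e in enumerate(embeddings):
        best, best_sim = None, threshold
        for c, members in enumerate(clusters):
            sim = float(np.max(embeddings[members] @ e))   # best match among representatives
            if sim >= best_sim:
                best, best_sim = c, sim
        if best is None:
            clusters.append([i])               # start a new cluster (the real pipeline may
            labels.append(len(clusters) - 1)   # instead send it to a no-categorical-info bucket)
        else:
            labels.append(best)                # follower-style assignment; representatives stay fixed
    return labels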

[Image: ML codebase sample]

Face recognition is very subjective, and I myself have to compare other features to make sure that the correct person(s) have indeed been grouped together by the pipeline. But with a latency of around 25 ms, it seems to do very well on a held-out dataset of persons with close-up faces, (Gen-Z) selfies and sunglasses-occluded eyes. Personal photos are much easier to classify/recognize compared to such a dataset!

For any practical ML-integrated product, we need a very performant concurrent pipeline to keep feeding the model, while staying constantly aware of any data-distribution impedance mismatch, to reach anywhere near the 'accuracy' and speed promised in a research paper. This touches upon the need for a good understanding of software-engineering basics, while being aware of possible regressions resulting from a newer functionality like ML.
Though bigger VLM/LLM (base) models have the potential to handle data-impedance mismatch issues due to their sheer size, their usage would definitely hamper application responsiveness, and they have proven relatively rigid when fine-tuning for a specific domain!

Indexing:

The indexing pipeline takes a desired data location as its input and recursively scans directories to collect raw data in batches. Multiple meta attributes such as exif-data, size, mount location, and name are extracted, to be queried later through the meta-indexing engine. The focus has been on designing a good schema to accommodate future use-cases, but since we collect only meta-information, without ever modifying or duplicating the original data, it remains relatively easy to shift to a newer version/schema, even through automatic means.
ML models extract semantic information which can later be queried through a vector-indexing engine. By default, resources to be indexed are assumed to reside on a local disk, but any protocol could be leveraged if proper authorization and authentication can be provided.
The monolithic nature of the code lets me share raw data, read/collected once, across various components like hash generation and preprocessing code for ML models, reducing the number of costly I/O calls. This pipeline has come a long way from a blocking implementation to its current (almost) fully async nature, resulting in very high saturation of computing resources. Apart from running multiple threads, dedicated kernels/functions are used to speed up the pipeline by fusing operations wherever possible. One such example/scenario is shown below.

def preprocess_kernel(
        image,                    # Tensor[uint8] of shape (height, width, 3)
        new_shape,                # (new_height, new_width)
        rgb_to_bgr:bool = True,
        normalize:bool = True):
    # Preprocess kernel: may fuse resize, colour conversion and normalization
    # into a single pass over the output pixels!

    # Pseudo-code!

    new_height, new_width = new_shape
    result = newEmptyTensor(uint8, (new_height, new_width, 3))
    for i in range(new_height):
        for j in range(new_width):
            # nearest source pixel for the resize step
            inp_h, inp_w = get_corresponding_pixel(image, i, j)
            for k in range(3):
                if rgb_to_bgr:
                    result[i, j, 3-k-1] = image[inp_h, inp_w, k]
                else:
                    result[i, j, k] = image[inp_h, inp_w, k]
            # if normalize: further subtract the mean and divide by the deviation
            # used for the training dataset (fused here as well)...
    return result

Each resource could be assumed to go through a flow like this:

resource_location = "file://xyz.jpg"
# OR
resource_location = "remoteProtocol://xyz.jpg"

raw_data = download_raw_data(resource_location)

embeddings = ML_model( preprocess(raw_data))
exif_data = extract_exif_data(raw_data)
preview = generate_preview(raw_data)
write_preview(preview)
....

Vector Index:

It is another minimal module to store vector embeddings as shards on disk. Necessary meta-data is stashed along with each shard to make it self-contained, which in future will help with distributed/parallel retrieval. For now each shard is just a numpy (float32) tensor, and the comparison routine is a np.dot operation, which itself uses the BLAS/OpenBLAS library to speed things up! Each shard is loaded from disk during a query, and top-k candidates are collected to be fused with other deterministic meta-attributes. Loading from disk does add some latency, but it allows me to regulate RAM usage through the shard-size hyper-parameter, which allows running this on platforms with diverse specifications, including single-board computers. Shard size can be kept relatively high on higher-RAM systems to speed up shard querying.
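
A minimal sketch of that shard-wise retrieval (file layout and names are illustrative; each shard is assumed to be a float32 matrix saved with numpy):

import glob
import heapq
import numpy as np

def query_shards(query, shard_glob = "shards/*.npy", top_k = 10):
    candidates = []                               # (score, shard_path, row) tuples
    for path in glob.glob(shard_glob):
        shard = np.load(path)                     # (shard_size, dim) float32, loaded per query
        scores = shard @ query                    # BLAS-backed inner products
        for row in np.argsort(-scores)[:top_k]:
            candidates.append((float(scores[row]), path, int(row)))
    return heapq.nlargest(top_k, candidates)      # fuse per-shard top-k into a global top-k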

Matmul is one of the most optimized algorithms, running at almost 90% of theoretical capacity on most Intel/AMD CPUs when leveraging BLAS-like libraries, so every further optimization from here on involves some kind of information loss. There is now a whole literature on speeding up this comparison/retrieval process through quantization and/or nearest-neighbour indices like HNSW. Fast SSDs are also leveraged to run such comparisons at very high speed, for up to a billion vectors on just a single node, in near real time!

But all such techniques involve compression of information (which is itself best-effort, being the result of modeling a large amount of biased data) through out-of-band mechanisms; for example, creating centroids/clusters is based only on the vector values and taking some mean, with no way to pass information back to the model which produced those vectors in the first place. This way is quick and you get great speed-ups, and there is an active debate among vector-database vendors across various metrics and implementations. In my experience, only visual results on personal data are a good metric for a user to test against. Product quantization is what I would implement if I were to choose one, as I think that coupled with top-k it should work reasonably well to include (subjectively) correct results (high recall!).

Another worthy and very effective solution, I think, is to instead train a linear layer to fine-tune the original model for the task at hand. Features/embeddings from a big enough model can be assumed to carry knowledge about diverse topics, but a user may, for example, be trying to distinguish between different plants. A linear layer can easily be trained with just a few thousand samples to achieve this with much higher accuracy than the original model, even with half the size/dimension of the original embeddings. Intuitively, we have freed the information channels to focus on just plants, decreasing the entropy the model earlier had to deal with. Any such layer can be trained even without a framework, as it is just one backward operation to implement. OpenAI has a nice cookbook if a reader wants to explore this further: https://github.com/openai/openai-cookbook/blob/main/examples/Fine-tuned_classification.ipynb
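
A small sketch of that linear-probe idea, here using scikit-learn's logistic regression as the linear layer purely for brevity (as noted above, the single backward pass could just as well be implemented by hand without any framework):

import numpy as np
from sklearn.linear_model import LogisticRegression

def train_linear_probe(embeddings, labels):
    # embeddings: (n, d) float32 features from the frozen base model.
    probe = LogisticRegression(max_iter = 1000)
    probe.fit(embeddings, labels)                 # a few thousand samples are often enough
    return probe

# predictions = train_linear_probe(train_emb, train_labels).predict(test_emb)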

An interesting thing sharding allows is using any available hardware to speed up retrieval. Since we need just the comparison routine and the corresponding shard(s) to return top-k candidates, retrieval is decoupled from any application code. A new smartphone could be detected and some shards transferred during initial set-up; the optimal percentage/number of shards can easily be calculated by running the same comparison operation on the new device. For example, running a 2048 x 2048 inner-product op and comparing the latency with the master/main device tells us the capacity of the new device, and that determines the number of shards transferred to speed up the retrieval process, as sketched below.
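
A back-of-the-envelope sketch of that calculation (the proportional-split rule and the numbers are illustrative assumptions): benchmark the same inner-product op on both devices and hand the new device a share of shards proportional to its measured throughput.

import time
import numpy as np

def benchmark_matmul(n = 2048, repeats = 5):
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b
    return (time.perf_counter() - start) / repeats    # average seconds per inner-product op

def shards_for_new_device(total_shards, seconds_main, seconds_new):
    # Throughput is inversely proportional to latency, so split shards accordingly.
    share = (1 / seconds_new) / ((1 / seconds_main) + (1 / seconds_new))
    return int(total_shards * share)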

There are still performance gains to be had in the current implementation; I would like to at least start using the float16 data-type, but it's a bit tricky on Intel CPUs with no compiler support for this type. Printing the CPU capabilities does show the presence of float16 hardware support on my system! ARM(v8 64) seems to offer native float16/floatH types; there seems to be a difference between that type being supported natively by the compiler versus through intrinsics/assembly code. I have not been able to get the expected speed-up for now! Such code is still being experimented with in the limited time I have.

Backend:

The backend is written in Python and exposes a pure API server for the client/frontend to make API calls to. Starting with very naive code that just returned all the meta-data for a directory, up to the current pagination support, it has gone through many revisions and design iterations, and I now have a much clearer idea about how to architect/wrap a big enough piece of functionality. I wanted the app to be end to end, but this also puts extra pressure on the app to be responsive enough for all user events. The current indexing code is capable of providing rich details such as the directory currently being scanned and the estimated time remaining (ETA), and it allows robust cancellation of an ongoing task/threads. It has not been easy to model such communication between concurrent tasks, and it touches upon the much-discussed structured-concurrency debate, i.e. how to run multiple tasks asynchronously while being able to robustly cancel them at any point in time, all while collecting all errors cleanly!

From my C days I have been a user of (POSIX) threads-style implementations; since the major OSes provide those minimal but stable APIs, it helps me when context-switching to different languages. Both C and Nim expose them; Python itself lets the OS manage threads without its own runtime implementation, but bypassing the GIL when it makes sense is something the user has to do to fully utilize resources! This kind of code also requires the user to write a lot of code to communicate between threads, but at least I (think I) understand the basic ideas needed to prevent deadlocks and iron out the initial bugs. As you run such threads deeper and deeper inside the application stack, it keeps getting harder to communicate information back to the client. But when it starts working, it is really cool to have a central interface to see all the stuff the backend is doing and to predict a very good ETA!

Flask was initially used to easily map functions to a particular route/URL to wrap up the initial implementation; the current implementation now just uses werkzeug (the main engine behind Flask) directly, doing away with a lot of unneeded dependencies like the template engine that Flask ships with. Even though this does not affect the end user in any visible way, it has been a very nice quality-of-life improvement for me as a developer. Since werkzeug is pure Python, it can now be shipped/bundled directly as source code. Also, each request is now handled by an available thread (from a pool) reading the HTTP environment from a shared queue, following the conventional model; by default, for its multi-threaded option, werkzeug would create a fresh new thread to handle each request. This does away with a lot of OS/system calls for each new request, and latency now seems more consistent and predictable. I have also stumbled upon a pattern that makes it easier to mount multiple apps cleanly, given that I never liked (or even fully understood) the blueprints that Flask offers for distributing the logic of your app across other modules. Since the WSGI protocol just expects a callable Python object, it should be much easier to develop independent apps without any knowledge of where they will be called/used. It also makes it quite fun to actually write/expose Python code to handle client inputs.

from typing import Callable, Iterable, Optional
from werkzeug.routing import Map, Rule
from werkzeug.wrappers import Request

class SimpleApp():
    """Each instance can be used as a WSGI-compatible callable."""
    def __init__(self, allow_local_cors:bool = False):
        self.initialized = False
        self.http_methods = ["GET", "POST", "PUT", "DELETE", "OPTIONS"]
        self.url_map:Optional[Map] = None  # we will lazily initialize it!
        self.extension_prefix = "ext"      # extensions/apps are registered under /ext/<name>
        self.registered_extensions:dict[str, "SimpleApp"] = {}
        self.endpoint_2_uri:dict[str, tuple] = {}
        self.endpoint_2_viewFunction:dict[str, Callable] = {}

        ...

    def add_url_rule(self,
                     rule:str,
                     view_function:Callable,          # corresponding view.
                     endpoint:Optional[str] = None,   # defaults to the view function's name
                     methods:list[str] = ["GET"]):

        ... # some validation code (including setting a default endpoint).

        self.endpoint_2_uri[endpoint] = (
            Rule(rule, endpoint = endpoint),
            methods,
            )
        self.endpoint_2_viewFunction[endpoint] = view_function
        self.initialized = False   # force the url_map to be (re)built lazily

    def register(self, app:"SimpleApp", name:str):
        """
        Here we register another such `app`.
        It is mounted at `/ext/<name>`, so all requests to /ext/<name>/<route> are forwarded to this `app`.
        """

        ... # some validation code.
        self.registered_extensions[name] = app
        print("Extension registered at: {}/{}".format(self.extension_prefix, name))

    def __call__(self, environ, start_response) -> Iterable[bytes]:
        # This is called by the WSGI server for every incoming request.
        if not (self.initialized):
            print("[Initializing]: Parent")
            self.initialize()

        for ext in self.registered_extensions:
            if not (self.registered_extensions[ext].initialized):
                print("[Initializing]: {}".format(ext))
                self.registered_extensions[ext].initialize()

        # If this is a call to such an extension, we modify the environment a bit.
        active_app = self
        extension_name = None
        temp_path = environ['PATH_INFO']
        temp_split = temp_path.split("/")
        if len(temp_split) > 2 and temp_split[1] == self.extension_prefix:

            extension_name = temp_split[2]
            assert extension_name in self.registered_extensions, "Unknown extension: {}".format(extension_name)
            extension_path = temp_path.replace("/{}/{}".format(self.extension_prefix, extension_name), "")

            environ['PATH_INFO'] = extension_path
            environ['REQUEST_URI'] = extension_path
            environ['RAW_URI'] = extension_path

            active_app = self.registered_extensions[extension_name]

        ## -----------------------------------------------
        # NOTE: only werkzeug specific code is here!
        # ---------------------------------------------
        request = Request(environ = environ)  # minimal wrapping code!
        urls = active_app.url_map.bind_to_environ(environ)
        endpoint, args = urls.match()

        # The view function returns a werkzeug Response (itself a WSGI callable),
        # which we call to produce the iterable of bytes the WSGI protocol expects.
        response = active_app.endpoint_2_viewFunction[endpoint](request, **args)
        return response(environ, start_response)
        # ---------------------------------------------------------

Note that any existing Python object can be made to accept client requests on demand by adding very minimal code, and this can be done for selective functionality. For example, during setup of a new Android device I may have to ask the user to choose one of the existing devices; this kind of interactive input can be modeled easily now, as I just add a new routine in the corresponding class to accept requests on a route such as /ext/android/beginSetup. Once I get that request, all the existing logic already written can be used to finish the setup. It is as easy as parent_app.register(app = thisApp, name = "android") to start routing the corresponding requests to this app!

ML:

Machine learning is powered by a framework written completely in Nim; most of the work on that framework was done before I even started working on this project. This has allowed me to ship the CLIP and face-recognition pipelines along with the application while depending only on oneDNN for some routines. oneDNN (mkl-dnn) ( https://github.com/uxlfoundation/oneDNN ) is one of the libraries for speeding up various deep-learning operations, and it has great documentation.

Ported models run faster on Intel/AMD CPUs than their PyTorch counterparts, owing to fusion of operations like batch normalization and convolution, and to high re-use of pre-allocated memory (similar to in-place operations). A torch.compile-like engine would end up making some of those optimizations after analyzing the graph, but at least as of version 2.0 it is not supported on Windows for me to compare against!
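
As an illustration of the batch-normalization-into-convolution fusion mentioned above, here is a numpy sketch of folding the BN parameters into the preceding convolution's weights and bias (assuming the usual (out_channels, in_channels, kH, kW) weight layout; this is not the framework's actual code):

import numpy as np

def fold_batchnorm(W, b, gamma, beta, mean, var, eps = 1e-5):
    # After folding, conv(x, W_folded) + b_folded equals batchnorm(conv(x, W) + b).
    scale = gamma / np.sqrt(var + eps)            # one scale factor per output channel
    W_folded = W * scale[:, None, None, None]
    b_folded = (b - mean) * scale + beta
    return W_folded, b_folded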

It took a lot of effort during the one-to-two years I was working on it to make it complete enough for me to start porting deep-learning models with it. Also, oneDNN shifted to v3 during that time, and only some code was updated to the newer API, which has left the project in an unstable state with no visible stable APIs for users to work with. For each model I have to manually analyze the locations/requirements for fusion of operations and port quite a lot of pre-processing and post-processing code to make it end to end. These factors contributed to a lot of technical debt, which I have not found the resources to tackle yet. Without removing that debt it never made sense to open-source it; besides, there are now projects like GGML and tinygrad to serve inference-only needs with minimal resources!

[Image: ML codebase sample]

[Image: ML codebase sample]

Porting each model is quite an involved task, as you have to read enough papers to understand the ideas behind the model if you want to fine-tune it later too. You may first want to find or create a simpler implementation in PyTorch to make it easier to port to a new language. All experimentation can be done in PyTorch/Python; for example, I experimented with alternate quantized attention layers for the CLIP model, and it indeed performed better on the eval datasets mentioned in the CLIP paper. Tangentially, it was really cool to read through the OpenAI implementations and papers: the papers were written in an approachable manner that let the reader indulge in hypotheses, and the codebases were clean with minimal dependencies. It's really a shame what that company/organisation chose to become under the guise of "user-safety", effectively clipping the (open) ethos of this field, but at the same time I am grateful for all the researchers' work in this current DL/ML era and for seeing the evolution of this field in such an open manner!

I would still like to work on that project, at least enough to tackle that debt and open-source it in a state for users to extend, if found useful. Even though I am using oneDNN for some routines, I think it is better to have a common and easier-to-extend codebase to allow more experimentation and aggressive optimizations, but this is itself a huge task, and now with multiple GPU architectures it's just something that couldn't be tackled without a lot of time and money. Even in this age where the H100 is the baseline for benchmarks, I find it worthwhile to work on a minimal DL compiler that just targets ARM/Intel/RISC CPUs, to start taking advantage of these cheaper machines. Being able to pin-point a tennis ball in 3D space remains the dream!

Frontend / App:

The current front-end is written completely in HTML, JS (TS) and (Tailwind) CSS as a multi-page webapp. An earlier frontend was written in Svelte, but the lack of internal documentation and too much "magic" became too "viral" for me to handle. For me, the abstractions and APIs exposed by browsers are more than enough to maintain the required precision during DOM updates. Care is taken to use batch updates, prevent redundant rendering, and use resources judiciously through pagination to avoid unneeded pressure, even for a local backend server. It has passed our litmus test of searching over 180 GB of the indexed Pexels dataset on a (minimal) remote server. My friend Akshay helped a lot with frontend development, testing various datasets and offering detailed bug reports which helped uncover a lot of edge cases during development of the project. There will always be room for improvement on the UX/UI side, but we have found it is much easier to extend and improve the frontend with a stable backend!

Pexels dataset: https://huggingface.co/datasets/opendiffusionai/pexels-photos-janpf

Apart from the webapp, there is also a Windows app, which under the hood uses a webview to render the frontend. All native Windows APIs remain available from the Nim code, which puts it into a hybrid category. It is not ideal, but at least it doesn't require me to ship a full web browser, which I think is a waste of compute resources; at the same time it leaves me wondering how current GUI development became so resource-intensive for a single developer to manage while offering so little benefit! I have been looking into forks of earlier GTK versions for Linux to keep the complexity/learning contained, but that also seems nothing less than an adventure!

Tools/libraries:

  • Nimpy ( https://github.com/yglukhov/nimpy ): A minimal Python-Nim bridge that makes it easier to write extensions in Nim to be called from Python and to use Python modules in Nim. Unlike many such bridges, which include a lot of boiler-plate code, there are no complex classes/interfaces to be included in the extension. It targets the necessary features: marshaling of native Python types to and from Nim, targeting the minimal Python API so as not to depend on Python versions, and finding the underlying python.dll at runtime.

  • Stb Image ( https://github.com/nothings/stb ): I am a big fan of such single-header libraries; this one implements readers/writers for most image formats in pure C. It is very easy to modify it to pass a pointer to the raw data and to write raw data into pre-allocated memory, saving costly memory copying that is particularly visible for 4K photos! It helps remove the dependency on OpenCV for image reading! Nim made it very easy to just compile this along with the other Nim code.

  • LibWebp ( https://github.com/webmproject/libwebp ): Allows decoding and encoding of the WebP format. Though the documentation is a bit sparse on some internal API usage, a lot of examples are included in the repository to read. I managed to use the argb field directly to pass ARGB-format data, doing away with transformation logic and some (memory) allocations. It follows a callback-passing convention to implement custom behaviour like a progress bar and to write encoded data to a user-provided buffer. Written completely in C and very easy to compile and read, it is being used for writing image previews, helping remove the dependency on OpenCV.

  • Zig-cc ( https://ziglang.org ): Using zig/clang as a C compiler allowed me to easily cross-compile a lot of Nim code for Linux, targeting libc 2.27. Making it easy to set a libc target has proved very useful for bypassing libc version mismatches! Really cool work by the Zig community to tackle a lot of such technical debt and make software development much easier!

As mentioned earlier, I try to use a lot of existing open-source code if I can, even if only for reading/understanding purposes. It still blows my mind, even after many years, to be able to just read/understand some complex implementation and modify it for a personal use-case for free. For example, even though OpenCV is a big/complex dependency, it still has a very readable codebase, and I read code from it a few times during this project to understand differences between my port and the OpenCV one.

Integrating multiple languages has its own challenges too, as it requires us to understand the boundaries, internal details, and assumptions that each runtime wants the developer to respect. It gets complex to reproduce and understand bugs while running multiple multi-threaded runtimes, as debugging gets more difficult. Debugging is one of the things I would like to get better at; I have very limited knowledge of GDB as of now, which is expected to be table stakes for debugging in such environments. I have had some nasty bugs, but being able to compile all the required pieces made it a bit easier to debug, even with print-style debugging :)

Current State:

More functionality is working than not, and having tested over 500k images I can be reasonably satisfied with the internals' performance and robustness. I would like to say that it can easily handle 10 million images/resources, and there is nothing to suggest that it won't, but that is different from using a production database where performance can be extrapolated confidently. Despite being written (almost) from scratch in a number of languages, both the indexing and inference pipelines are more expressive, robust and faster than many similar image-search apps, though benchmarking such complex apps can be subjective, more so when you mix in semantic search.

There are still some hardcoded constants and, intentionally, some lower-performing components, like the ViT-B/32 variant of the CLIP model, which act as placeholders and can easily be replaced with better counterparts in the future.

It has been tested on Windows 10/11 and on Fedora 42/43, assuming an x64 architecture. Compiled extensions are also packaged for quickly testing the application, but users are free to compile the code as they see fit. Linux shared objects target libc 2.27, so they should work on most recent distributions out of the box. Except for some ML code, the main requirement for a user to further extend the codebase is just a C compiler. Most testing is done on my laptop with an i5-8300H processor and 8 GB of memory. I don't have a macOS machine to test on; the ML code would need to be modified to target the ARM architecture, but beyond that only very minor modifications should be needed, if any. It is quite possible for initial users to encounter minor bugs, due to its limited run in diverse dev environments, but installation and usage on cloud servers during testing has been quite smooth.

Below is a video showcasing the workflow to index data from multiple MTP/Android devices. (Still a WIP.)

Random Thoughts/Philosophy:

I think it gets easier with time to grok larger codebases and to isolate/find the functionality/implementation a reader is interested in. Most mature codebases are organized to help navigate the source tree anyway, and have detailed documentation. Having enough patience to make yourself comfortable is a necessary part of growing as a developer, as an unfamiliar documentation/codebase will always seem alien and big enough to trigger that flight reaction!

Batching and caching are two generic strategies that can be applied to speed up most bottleneck portions. Both strategies lead to better/denser utilization of CPUs by (trying to) minimise costly load/store instructions during a hot loop. Batching, for example, can do this by allocating the necessary memory up-front for a batch and de-allocating it all at once when no longer required, reducing the number of costly system calls. Caching may involve designing or using a (hardware-)cache-friendly data-structure, when it is possible to do so.

Each optimization involves assumptions, and each subsequent optimization becomes harder and harder to implement, possibly preventing clean refactoring of the code when future functionality needs to be accommodated. It is a kind of rabbit hole in itself, and you should know when to stop, as there will always be something else to optimize!

With (coding) tools involving AI/LLMs it is easier than ever to get a piece of desired functionality; as a developer I understand it is another useful tool in a long history of improvements that most developers will come to use in their workflow. Current LLMs have an undeniable ability to handle complex instructions and explain non-trivial code, and that across various mixed modalities! It has been almost unreasonable to end up with such abilities with just next-token prediction as the primary objective, even for a researcher working in this field. My usage of such tools is only through (free) search engine(s). Although so far there has been no scenario in which such tools have helped me reach something I wouldn't have got to using traditional means, I can admit such tools/engines are really effective at helping us get unstuck in a variety of situations, arguably helping us learn faster. Functions/routines are nice and sufficient abstractions to provide enough context to such engines to get the required help, without ever needing a review/edit/rewrite cycle.
I have always benefited from visiting the original documentation: if an AI is spitting out good enough arguments, there must be good documentation out there for that topic. Our mind's capability to extract abstract patterns from studying one topic and apply them to another, seemingly unrelated domain is uncanny, to say the least. The tone/motivation of the developer writing about a topic also matters to me, and many times I have explored a concept further just because the writer was very excited about it. Again, these are just personal factors and biases, and people should be free to choose the workflow they feel most comfortable in, without any judgments from either side.
It has been difficult to assess SOTA models' actual abilities, with fewer and fewer details being published for each newer version, but it has been a wild ride for me to see the evolution from RNNs to bi-directional RNNs to LSTMs to the Transformer architecture (finally finding at least one stable architecture able to support training on the whole internet without exploding or vanishing gradients). Arguably there are also more open families of models like Qwen or DeepSeek from other labs which can run on local infrastructure. Even at this stage, the ideas behind LLMs are simple enough for anybody to understand without burdening them with terms like AGI. There is already great work from OLMo and SmolLM to build upon and start with for personal needs, without spending a lot of money. On the technical front there is still much more to explore, and it comes down to more experiments by smaller companies to prevent ending up with another monopoly/duopoly in this field, only to later blame it for their incompetence!
I literally have no idea what the end game will be with this ever-increasing ability of AI models, or what social consequences we will end up with in an already fragmented and struggling world. But it would be a mistake to abandon learning, however inconvenient it may seem at any time, if we are to survive!
The thing that really boils my blood is these (AI) companies' lawless laundering of all the open-source code, art, and poetry, without any attribution, only for it to be packaged as a product for users to pay for. The constant attacks on infrastructure, even that run by very small or single-developer companies/communities, not respecting any robots.txt, proxying through residential networks, damaging the very core of information-sharing on the internet while coming up with ironic headlines, borders on criminal behaviour for me! Waiting tens of seconds just for a (community-written) Stack Overflow post behind many layers of security, just to understand various perspectives on some concept without all the bullshit summarization, is the new bleak reality, with no say for end-users.

Despite the dominant usage of LLMs, there exist equally interesting smaller models/architectures representing the huge potential that this field of deep learning holds. Neural networks allow us to model (well enough) any arbitrary function/flow using an iterative framework from a few thousand samples representing the function space, effectively equipping us with a very powerful statistical tool to introduce a new independent signal that reduces the entropy of the problem in many domains. (Self-supervised learning doesn't even need explicit outputs, how cool is that; see https://ai.meta.com/blog/dino-v2-computer-vision-self-supervised-learning/ for more information.) I am a fan of smaller personalized models' potential to tackle everyday problems, and I myself use cheap off-the-shelf cameras coupled with a DL model to detect those damn monkeys, and for local voice synthesis. Monkey capturing was even on the manifesto of one of the candidates in the city-level elections! In a country like India, where even (traditional) automation is limited to the products of very few big companies, I can't help smiling whenever I point the remote at my "AI"-controlled AC :)

Living in a tier-two town in northern India with very minimal fixed costs has allowed me to work on this for quite a long time without any savings or continuous financial freedom. But I cannot be a hypocrite about it, as it was a conscious decision to learn, explore and implement some of the ideas I had had for some time. In return, this has allowed me to stay in touch with friends, play a lot of outdoor games, and reflect on the things I want to spend more time on in the future.

Timely financial grants over the last one and a half years from the Samagata Foundation and FossUnited have allowed me to complete the bulk of the work to the point where I am satisfied with the current state of the project, for which I will always be grateful.

I would very much like to continue with this or adjacent projects, as there are still a lot of ideas and code pending to make it a very stable everyday engine for users. But for that I will have to figure out a way to sustain it, without ever compromising the core features/functionality in any way, as those were some of the reasons I started working on it in the first place! Extensions to allow indexing remote storage like Google Drive or Android devices smoothly from the app itself seem like a good direction in that regard for now!

The lost cause of the Lisp machines

Lobsters
www.tfeb.org
2025-11-19 12:17:50
Comments...
Original Article

:: lisp , stupidity , stories

I am just really bored by Lisp Machine romantics at this point: they should go away. I expect they never will.

History

Symbolics went bankrupt in early 1993. In the way of these things various remnants of the company lingered on for, in this case, decades. But 1993 was when the Lisp Machines died.

The death was not unexpected: by the time I started using mainstream Lisps in 1989 1 everyone knew that special hardware for Lisp was a dead idea. The common idea was that the arrival of RISC machines had killed it, but in fact machines like the Sun 3/260 in its ‘AI’ configuration 2 were already hammering nails in its coffin. In 1987 I read a report showing the Lisp performance of an early RISC machine, using Kyoto Common Lisp , not a famously fast implementation of CL, beating a Symbolics on the Gabriel benchmarks [PDF link].

1993 is 32 years ago. The Symbolics 3600, probably the first Lisp machine that sold in more than tiny numbers, was introduced in 1983, ten years earlier. People who used Lisp machines other than as historical artefacts are old today 3 .

Lisp machines were both widely available and offered the best performance for Lisp for a period of about five years which ended nearly forty years ago. They were probably never competitive in terms of performance for the money.

It is time, and long past time, to let them go.

But still the romantics — some of them even old enough to remember the Lisp machines — repeat their myths.

‘It was the development environment’

No, it wasn’t.

The development environments offered by both families of Lisp machines were seriously cool, at least for the 1980s. I mean, they really were very cool indeed. Some of the ways they were cool matter today, but some don’t. For instance in the 1980s and early 1990s Lisp images were very large compared to available memory, and machines were also extremely slow in general. So good Lisp development environments did a lot of work to hide this slowness, and in general to make sure you only very seldom had to restart everything, which took significant fractions of an hour, if not more. None of that matters today, because machines are so quick and Lisps so relatively small.

But that’s not the only way they were cool. They really were just lovely things to use in many ways. But, despite what people might believe: this did not depend on the hardware: there is no reason at all why a development environment that cool could not be built on stock hardware. Perhaps, (perhaps) that was not true in 1990: it is certainly true today.

So if a really cool Lisp development environment doesn’t exist today, it is nothing to do with Lisp machines not existing. In fact, as someone who used Lisp machines, I find the LispWorks development environment at least as comfortable and productive as they were. But, oh no, the full-fat version is not free, and no version is open source. Neither, I remind you, were they.

‘They were much faster than anything else’

No, they weren’t . Please, stop with that.

‘The hardware was user-microcodable, you see’

Please, stop telling me things about machines I used : believe it or not, I know those things.

Many machines were user-microcodable before about 1990. That meant that, technically, a user of the machine could implement their own instruction set. I am sure there are cases where people even did that, and a much smaller number of cases where doing that was not just a waste of time.

But in almost all cases the only people who wrote microcode were the people who built the machine. And the reason they wrote microcode was because it is the easiest way of implementing a very complex instruction set, especially when you can’t use vast numbers of transistors. For instance if you’re going to provide an ‘add’ instruction which will add numbers of any type, trapping back into user code for some cases, then by far the easiest way of doing that is going to be by writing code, not building hardware. And that’s what the Lisp machines did.

Of course, the compiler could have generated that code for hardware without that instruction. But with the special instruction the compiler’s job is much easier, and code is smaller. A small, quick compiler and small compiled code were very important with slow machines which had tiny amounts of memory. Of course a compiler not made of wet string could have used type information to avoid generating the full dispatch case, but wet string was all that was available.

What microcodable machines almost never meant was that users of the machines would write microcode.

At the time, the tradeoffs made by Lisp machines might even have been reasonable. CISC machines in general were probably good compromises given the expense of memory and how rudimentary compilers were: I can remember being horrified at the size of compiled code for RISC machines. But I was horrified because I wasn’t thinking about it properly. Moore’s law was very much in effect in about 1990 and, among other things, it meant that the amount of memory you could afford was rising exponentially with time: the RISC people understood that.

‘They were Lisp all the way down’

This, finally, maybe, is a good point. They were, and you could dig around and change things on the fly, and this was pretty cool. Sometimes you could even replicate the things you’d done later. I remember playing with sound on a 3645 which was really only possible because you could get low-level access to the disk from Lisp, as the disk could just marginally provide data fast enough to stream sound.

On the other hand they had no isolation and thus no security at all: people didn’t care about that in 1985, but if I was using a Lisp-based machine today I would certainly be unhappy if my web browser could modify my device drivers on the fly, or poke and peek at network buffers. A machine that was Lisp all the way down today would need to ensure that things like that couldn’t happen.

So may be it would be Lisp all the way down, but you absolutely would not have the kind of ability to poke around in and redefine parts of the guts you had on Lisp machines. Maybe that’s still worth it.

Not to mention that I’m just not very interested in spending a huge amount of time grovelling around in the guts of something like an SSL implementation: those things exist already, and I’d rather do something new and cool. I’d rather do something that Lisp is uniquely suited for, not reinvent wheels. Well, may be that’s just me.

Machines which were Lisp all the way down might, indeed, be interesting, although they could not look like 1980s Lisp machines if they were to be safe. But that does not mean they would need special hardware for Lisp: they wouldn’t. If you want something like this, hardware is not holding you back: there’s no need to endlessly mourn the lost age of Lisp machines, you can start making one now. Shut up and code.

And now we come to the really strange arguments, the arguments that we need special Lisp machines either for reasons which turn out to be straightforwardly false, or because we need something that Lisp machines never were .

‘Good Lisp compilers are too hard to write for stock hardware’

This mantra is getting old.

The most important thing is that we have good stock-hardware Lisp compilers today . As an example, today’s CL compilers are not far from CLANG/LLVM for floating-point code. I tested SBCL and LispWorks: it would be interesting to know how many times more work has gone into LLVM than them for such a relatively small improvement. I can’t imagine a world where these two CL compilers would not be at least comparable to LLVM if similar effort was spent on them 4 .

These things are so much better than the wet-cardboard-and-string compilers that the LispMs had it’s not funny.

A large amount of work is also going into compilation for other dynamically-typed, interactive languages which aim at high performance. That means on-the-fly compilation and recompilation of code where both the compilation and the resulting code must be quick. Example: Julia . Any of that development could be reused by Lisp compiler writers if they needed to or wanted to (I don’t know if they do, or should).

Ah, but then it turns out that that’s not what is meant by a ‘good compiler’ after all. It turns out that ‘good’ means ‘compilation is fast’.

All these compilers are pretty quick: the computational resources used by even a pretty hairy compiler have not scaled anything like as fast as those needed for the problems we want to solve (that’s why Julia can use LLVM on the fly). Compilation is also not an Amdahl bottleneck as it can happen on the node that needs the compiled code.

Compilers are so quick that a widely-used CL implementation exists where EVAL uses the compiler, unless you ask it not to.

Compilation options are also a thing: you can ask compilers to be quick, fussy, sloppy, safe, produce fast code and so on. Some radically modern languages also allow this to be done in a standardised (but extensible) way at the language level, so you can say ‘make this inner loop really quick, and I have checked all the bounds so don’t bother with that’.

The tradeoff between a fast Lisp compiler and a really good Lisp compiler is imaginary, at this point.

‘They had wonderful keyboards’

Well, if you didn’t mind the weird layouts: yes, they did 5 . And it has exactly nothing to do with Lisp.

And so it goes on.

Bored now

There’s a well-known syndrome amongst photographers and musicians called GAS: gear acquisition syndrome. Sufferers from this 6 pursue an endless stream of purchases of gear — cameras, guitars, FX pedals, the last long-expired batch of a legendary printing paper — in the strange hope that the next camera, the next pedal, that paper, will bring out the Don McCullin, Jimmy Page or Chris Killip in them. Because, of course, Don McCullin & Chris Killip only took the pictures they did because they had the right cameras: it was nothing to do with talent, practice or courage, no.

GAS is a lie we tell ourselves to avoid the awkward reality that what we actually need to do is practice , a lot, and that even if we did that we might not actually be very talented.

Lisp machine romanticism is the same thing: a wall we build ourselves so that, somehow unable to climb over it or knock it down, we never have to face the fact that the only thing stopping us is us.

There is no purpose to arguing with Lisp machine romantics because they will never accept that the person building the endless barriers in their way is the same person they see in the mirror every morning. They’re too busy building the walls.


As a footnote, I went to a talk by an HPC person in the early 90s (so: after the end of the cold war 7 and when the HPC money had gone) where they said that HPC people needed to be aiming at machines based on what big commercial systems looked like as nobody was going to fund dedicated HPC designs any more. At the time that meant big cache-coherent SMP systems. Those hit their limits and have really died out now: the bank I worked for had dozens of fully-populated big SMP systems in 2007, it perhaps still has one or two they can’t get rid of because of some legacy application. So HPC people now run on enormous shared-nothing farms of close-to-commodity processors with very fat interconnect and are wondering about / using GPUs. That’s similar to what happened to Lisp systems, of course: perhaps, in the HPC world, there are romantics who mourn the lost glories of the Cray–3. Well, if I was giving a talk to people interested in the possibilities of hardware today I’d be saying that in a few years there are going to be a lot of huge farms of GPUs going very cheap if you can afford the power. People could be looking at whether those can be used for anything more interesting than the huge neural networks they were designed for. I don’t know if they can.


California man admits to laundering crypto stolen in $230M heist

Bleeping Computer
www.bleepingcomputer.com
2025-11-19 12:13:34
A 45-year-old from Irvine, California, has pleaded guilty to laundering at least $25 million stolen in a massive $230 million cryptocurrency heist. [...]...
Original Article

Crypto theft

A 45-year-old from Irvine, California, has pleaded guilty to laundering at least $25 million stolen in a massive $230 million cryptocurrency heist.

Kunal Mehta (also known as "Papa," "The Accountant," and “Shrek") is the eighth defendant to plead guilty for his participation in this scheme following charges brought by the Department of Justice in May 2025.

According to court documents , the defendant was part of a large group that, through social engineering, gained access to victims' cryptocurrency accounts between October 2023 and March 2025 and transferred funds into crypto wallets under their control.


The crime ring included members (mostly 18-, 19-, and 20-year-olds) from California, Connecticut, New York, Florida, and abroad, and it grew through friendships developed while playing online games. Mehta served as a money launderer for the group, while others were involved in organizing, identifying targets, hacking, making calls, and stealing hardware crypto wallets.

In total, fourteen suspects were charged for their alleged involvement in the theft and laundering of $230 million in cryptocurrency, including 20-year-old Malone Lam (aka "Greavys," "Anne Hathaway," and "$$$") and 21-year-old Jeandiel Serrano (aka "Box," "VersaceGod," and "@SkidStar") who were arrested in Miami in September 2024 .

The following defendants, who were indicted in May 2025 , also face charges of obstruction of justice and conspiracy to commit wire fraud, besides cyber-enabled racketeering conspiracy and money laundering:

  • Marlon Ferro, 19 (Santa Ana, California)
  • Hamza Doost, 21 (Hayward, California)
  • Conor Flansburg, 21 (Newport Beach, California)
  • Ethan Yarally, 18 (Richmond Hill, New York)
  • Cody Demirtas, 19 (Stuart, Florida)
  • Aakash Anand, 22 (New Zealand)
  • Evan Tangeman, 21 (Newport Beach, California)
  • Joel Cortes, 21 (Laguna Niguel, California)
  • First Name Unknown-1, Last Name Unknown-1 aka "Chen" and "Squiggly" (location unknown)
  • First Name Unknown-2, Last Name Unknown-2 aka "Danny" and "Meech" (location unknown)
  • John Tucker Desmond, 19 (Huntington Beach, California)

In an August 18th attack, Lam and another accomplice stole over 4,100 Bitcoin from a Washington, D.C., victim, which was worth over $230 million (now valued at more than $384.5 million). They reportedly laundered the stolen cryptocurrency using crypto mixers and exchanges, "peel chains," pass-through wallets, and virtual private networks (VPNs) to hide their locations and identities.

Stolen crypto being transferred (ZachXBT)

However, while most of the stolen cryptocurrency was converted to Monero to hide the attackers' identity, they reportedly made critical errors, inadvertently linking the laundered funds to the original amounts stolen.

"Mehta created multiple shell companies in 2024 for the purpose of laundering funds through bank accounts created to give the appearance of legitimacy. To facilitate crypto-to-wire money laundering services, Mehta received stolen cryptocurrency from the group, which they had already laundered," the DOJ said on Tuesday.

"Mehta then transferred the cryptocurrency to associates who further laundered it through sophisticated blockchain laundering techniques. The solen funds returned to Mehta’s shell company bank accounts through incoming wire transfers from additional shell companies organized by others throughout the United States."

The investigators found that Mehta would typically charge a 10% fee for his services, which included converting stolen cryptocurrency to cash and making wire transfers for the group.

The stolen cryptocurrency was used to finance the group's lavish lifestyles, which allegedly included spending the funds on private jet rentals, at least 28 luxury cars (worth between $100,000 and $3.8 million), private security guards, designer handbags, high-end watches, nightclub outings, and international travel.

"Mehta is the eighth defendant to plead guilty for his role in this scheme," added FBI Special Agent in Charge Reid Davis this week. "Today's plea reaffirms the FBI's commitment to exposing fraudsters and should remind Americans to beware of online scammers: Do not reply to calls, emails, or texts that request personal information, such as your password, PIN, or any one-time passwords that are sent to your email or phone."


Legal Restrictions on Vulnerability Disclosure

Schneier
www.schneier.com
2025-11-19 12:04:50
Kendra Albert gave an excellent talk at USENIX Security this year, pointing out that the legal agreements surrounding vulnerability disclosure muzzle researchers while allowing companies to not fix the vulnerabilities—exactly the opposite of what the responsible disclosure movement of the earl...
Original Article

Kendra Albert gave an excellent talk at USENIX Security this year, pointing out that the legal agreements surrounding vulnerability disclosure muzzle researchers while allowing companies to not fix the vulnerabilities—exactly the opposite of what the responsible disclosure movement of the early 2000s was supposed to prevent. This is the talk.

Thirty years ago, a debate raged over whether vulnerability disclosure was good for computer security. On one side, full disclosure advocates argued that software bugs weren’t getting fixed and wouldn’t get fixed if companies that made insecure software weren’t called out publicly. On the other side, companies argued that full disclosure led to exploitation of unpatched vulnerabilities, especially if they were hard to fix. After blog posts, public debates, and countless mailing list flame wars, there emerged a compromise solution: coordinated vulnerability disclosure, where vulnerabilities were disclosed after a period of confidentiality during which vendors could attempt to fix things. Although full disclosure fell out of fashion, disclosure won and security through obscurity lost. We’ve lived happily ever after since.

Or have we? The move towards paid bug bounties and the rise of platforms that manage bug bounty programs for security teams has changed the reality of disclosure significantly. In certain cases, these programs require agreement to contractual restrictions. Under the status quo, that means that software companies sometimes funnel vulnerabilities into bug bounty management platforms and then condition submission on confidentiality agreements that can prohibit researchers from ever sharing their findings.

In this talk, I’ll explain how confidentiality requirements for managed bug bounty programs restrict the ability of those who attempt to report vulnerabilities to share their findings publicly, compromising the bargain at the center of the CVD process. I’ll discuss what contract law can tell us about how and when these restrictions are enforceable, and more importantly, when they aren’t, providing advice to hackers around how to understand their legal rights when submitting. Finally, I’ll call upon platforms and companies to adapt their practices to be more in line with the original bargain of coordinated vulnerability disclosure, including by banning agreements that require non-disclosure.

And this is me from 2007, talking about “responsible disclosure”:

This was a good idea—and these days it’s normal procedure—but one that was possible only because full disclosure was the norm. And it remains a good idea only as long as full disclosure is the threat.


Pimped Amiga 500

Hacker News
www.pimyretro.org
2025-11-19 12:02:49
Comments...
Original Article

Back in the early ’90s, I had an Amiga 2000 with just one expansion card: a SCSI controller paired with a massive 290 MB hard drive. Getting software and games to run from the hard drive—with only 1 MB of chip RAM—required a lot of tricks. But it was fun, and it taught me a lot about computers.

A few months ago, I stumbled upon a cheap Amiga 500, and I couldn’t resist. I decided to restore it from the ground up and add a GottaGoFast RAM + IDE controller to finally build what would have been my dream machine in 1990: an Amiga running OS 1.3 with fast RAM!

This is the story of my pimped Amiga 500: 1 MB chip RAM, 8 MB fast RAM, and 512 MB of storage. Quite a beast for its time! 🙂

Used Materials

Here are the hardware pieces I used:

  • An Amiga 500 I bought with a “512K memory expansion”
  • IDE68K + GottaGo FastRAM 8MB from AmigaStore.eu
  • A 512 MB CompactFlash card (LIMEI, “professional grade”)
  • A 40-pin 3.5in IDE ribbon cable
  • A Dremel to create a CompactFlash slot
  • Some (Dupont) wires and solder
  • Some pin headers
  • A multimeter
  • Isopropyl alcohol
  • Q-tips
  • Facom “Contact Spray”
  • Ambro-Sol galvanized zinc spray paint
  • A drill and a Dremel

First boot

At first, I had a hard time getting a Workbench disk to boot properly — even though the game disks I tested worked just fine. (They probably have better error correction routines.)

Each time I tried to start Workbench from different floppies, I ran into either “Read/Write Error” or “Please insert disk in drive 0!” messages. After several attempts and a few frustrating retries, I finally managed to reach a command prompt.

That’s when I noticed something strange: the system was reporting 1 MB of chip RAM. Wait a second — this is an Amiga 500, not a 500+! Even with a memory expansion, it should normally show 512 KB chip RAM and 512 KB slow RAM. This means my A500 must have been modified to convert the slow RAM into chip RAM. (For reference: “slow RAM” sits on the same bus as chip memory, but it’s not directly addressable by the custom chips.)

Inside

Of course, I found a memory expansion installed: a SupraRam 500 Rev.2 ( details here ), identifiable by the four DIP switches. It’s a very neat card — the battery uses a standard coin cell, which is much less prone to leaking than typical NiMH batteries.
Here’s a look at the expansion card inside the machine:

The motherboard is a Rev 6A, which is internally ready for 1 MB of chip RAM but only has 512 KB installed. Judging by the setup, this Amiga seems to have been modified to provide 1 MB chip RAM: the JP7A jumper is fully open, and JP2 has pins 1 and 2 shorted!
As you can see in this photo, the jumpers reveal the modification:

Inside, there’s a fat Agnus 8372A (capable of addressing 1 MB chip RAM) paired with a Denise R8 (OCS) rather than a SuperDenise (ECS). While it’s not an ECS setup, this combination at least allows Extra Half-Brite (EHB) mode.
The Agnus and Denise chips are shown here, highlighting the OCS configuration:

Hardware restoration

Plastics

The plastics on this Amiga were just a bit yellowed — nothing too severe. I was able to recover them easily using the same Retrobright box I used for my pimped Amiga 600 .

The power supply , however, had a noticeably stronger yellow tint compared to the other parts. I applied Retrobright to all components, and for the power supply, I gave it a longer exposure. It hasn’t fully returned to its original color, but it’s much improved.

On the left: before cleaning and Retrobright; on the right: after treatment:

Metallic shield

Both the upper and lower shield parts were in poor condition, showing some corrosion. While these shields aren’t strictly necessary for the Amiga to function, I wanted to keep my A500 as authentic as possible.

I treated the metal with Ambro-Sol spray (98% zinc) — a kind of metallic paint that also protects against corrosion. Before painting, I lightly sanded all corroded areas to ensure a smooth finish. The paint has a matte finish, which I actually prefer over the original look.

On the left: before painting; on the right: after treatment:

Keyboard

The keyboard was covered in dust and had a noticeable yellow tint. I removed all the keys to thoroughly clean each part and also subjected them to the Retrobright process.

Unfortunately, I didn’t take any photos of the cleaned keyboard on its own, but the results should be visible in the overall photos of the restored A500.

The mouse

The mouse wasn’t working properly and showed several issues:

  • The classic problem: dirty internal wheels .
  • The spring on the white wheel, which ensures the ball touches the encoder wheels, was too loose, so the ball didn’t make proper contact.
  • The right mouse button was dead or broken.

I replaced the right button with a new one — slightly more “clicky,” but it didn’t require any extra pressure to use.

Next, I cleaned all internal parts using alcohol and some Q-Tips, and I retensioned the spring by gently pulling both sides at the same time.

The final result: a cleaner interior and a fully functional, “like new” mouse.
Here’s the after look inside the mouse:

Floppy drive

The floppy drive in my Amiga 500 is a Matsushita JU-253-031P, recognizable by its plain black top cover over the mechanism. While it gives a clean look, it also makes dust removal more challenging compared to other drives.

I carefully used Q-Tips to remove as much dust as possible, paying special attention to the read/write heads, which are still easily accessible and crucial for reliable disk reading.

Additionally, I had to resolder the wires on the small floppy detector button, which had been causing the “Please insert disk in drive” errors during reading.

Here’s a look at the drive during cleaning and after reassembly:

Motherboard & Memory card

The motherboard was in pretty good condition. I simply applied some FACOM Contact Spray, which helps remove dust, humidity, and oxidation. It’s said to also provide some protection for the circuits — well, it certainly can’t hurt!

I did the same for the memory expansion card. Additionally, I replaced the soldered battery with a battery holder, making the setup cleaner and allowing the battery to be easily swapped in the future.

Here’s a look at the motherboard and memory card after cleaning and the battery upgrade:

Extensions

I installed the IDE68k + GottaGoFastram combo along with the patched Kickstart ROM that allows booting directly from an IDE device. I also picked up a 512 MB CompactFlash card, which provides more than enough space — considering that back in the mid-80s, even 20 or 40 MB felt enormous.
The patched Kickstart 1.3 includes a scsi.device, making it possible to boot from the emulated hard drive (the CF card). Without it, you would need to boot from a floppy — just like some disk controllers required back in the day.

Booting from the IDE interface requires two signals: /INT2 and /OVR.
The kit comes with Dupont wires and small clip-style “pliers” to grab these signals respectively from pin 21 of CIA A and pin 29 of Gary.
I wasn’t a fan of this approach — the clips are fragile and can easily detach when moving the Amiga.

Both signals are actually available on the Zorro II 86-pin connector next to the 68000 CPU (see: mklboards.fi ).
So I decided to solder both wires directly to the Zorro II connector. It’s cleaner, safer, and mechanically rock-solid.

Here are the tests I ran before finalizing the modification:

At first, the CF wasn’t powering up. Pin 20 of the IDE connector should provide +5 V for powering CF cards, but I measured 0 V.
I ended up taking +5 V from the keyed pin on the adapter and wiring it directly to the CF’s 5 V pin.
It seems something is missing from the Amigashop.eu hardware or in the documentation, because the kit is supposed to include everything required.

To simplify things, I modified the CF adapter, removing the bottom power connector and adding only the single required +5 V pin on top.
This reduces the height of the board — which turned out to be necessary for the next step.

I slightly modified the A500 case to fit the CF card reader under the floppy drive, making card swaps extremely convenient without reopening the machine each time.
I began by drilling two holes to mount the reader from the underside of the chassis:

Then I placed the CF card reader to mark out the opening needed for the CompactFlash card to be inserted. I first made some small holes with a drill and finished the job with a Dremel.

Because of the new placement, I needed a longer ribbon cable between the CF adapter and the IDE controller.
I eventually took the required +5 V for the adapter from the floppy drive connector — cleaner and more reliable.

Finally, I added a red LED to monitor IDE activity, complementing the original floppy LED.
I used two 3 mm LEDs glued between the Amiga’s two original indicators.
The mod is fully reversible.
I used aluminum adhesive tape to prevent the power LED from bleeding into the IDE LED, and to better diffuse the red light through the original light pipe.
A single resistor drives both LEDs.

The result looks great and gives clear feedback: IDE activity, floppy activity, or both at once.

On the left: no IDE or floppy activity — on the right: IDE activity.

Now: left = floppy only — right = both IDE and floppy working simultaneously:

With the hardware restored and the extensions fully installed, it was finally time to move on to the next step: preparing the operating system.

Preparing OS Install

Amiberry configuration

To make the installation process easier, I prepared the system using Amiberry first. I used a Kickstart 1.3 ROM patched with IDE controller support, similar to the physical ROM I purchased from Amigastore.eu. The version I used can be found here: https://www.uprough.net/releases/Amiga_Roots_Music_Tools_Beta_2/

Below are the Amiberry settings I used to replicate my Amiga 500 hardware as closely as possible:

  • CPU: 68000, 24-bit addressing, A500 cycle-exact (×2)
  • Chipset: ECS Agnus with A600 selected under “Chipset Extra” — this is important, otherwise the IDE controller will be disabled
  • RAM: Same as my real A500 — 1 MB Chip, 8 MB Z2 Fast
  • Expansion: Enabled the A600 IDE controller
  • Hard Drive: Mapped the Linux device corresponding to my USB CF card reader, selected the Commodore A600 controller, and set the mode to ATA-1 — this is essential, or the CF card won’t be detected correctly

These settings allow Amiberry to behave almost exactly like the upgraded A500 hardware, making the OS installation straightforward and 100% compatible with the real machine.

HDToolsBox

Nothing particularly unusual here, except that I first had to free some space on the “IDE Setup” floppy (I honestly don’t remember where I originally got it). Without doing so, HDToolBox refused to save the new drive-type definition.
To make room, I simply removed the Shell program from that floppy, since it’s already available on the Workbench disk anyway.

Once that was sorted out, here’s what I essentially did:

  • Edited the ToolTypes of HDToolBox to point to scsi.device
  • Launched HDToolBox.
  • Selected the CF drive and clicked “Change Drive Type”.
  • Created a new drive type definition.
  • Set the Manufacturer, Drive Name, and Revision fields.
  • Saved and selected this newly created drive type.

These steps allow HDToolBox to correctly recognize and handle the CF card as a proper fixed drive under Workbench.

Partitions

Below is the partitioning scheme I chose. I generally prefer to separate the operating system, its accompanying utilities, applications, games, and user data — essentially the Amiga equivalent of a “/home” directory.

  • DH0 – Workbench: 24 MB, 100 buffers
  • DH1 – Apps: 85 MB, 100 buffers
  • DH2 – Games: 140 MB, 100 buffers
  • DH3 – Data: 212 MB, 150 buffers

For all partitions, I used FFS with a maxtransfer value of 0x1FE00.
I formatted each partition using a command like:

format DRIVE DH0: name Workbench FFS QUICK
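
The other partitions get the same treatment; repeating the command with the names from the scheme above (nothing more than that) gives:

format DRIVE DH1: name Apps FFS QUICK
format DRIVE DH2: name Games FFS QUICK
format DRIVE DH3: name Data FFS QUICK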

Workbench 1.3 install

Installing Workbench 1.3 is fairly straightforward: it simply involves copying the contents of the Workbench and Extra disks onto the bootable partition, then editing the startup-sequence.
I later discovered that the A590 Install disk actually includes a dedicated tool for installing Workbench — but here’s the manual method I followed:

copy workbench1.3: to dh0: all clone
copy "extra 1.3:" to dh0: all clone
rename DH0:s/startup-sequence DH0:s/startup-sequence.FD
rename DH0:s/startup-sequence.HD DH0:s/startup-sequence
edit DH0:s/startup-sequence ; replace the call "Execute s:Startup-sequence" with "Execute s:Startup-sequence.FD"

I also copied HDToolBox from the “IDE Setup” disk into DH0:Tools for convenience.

After removing all floppy disks and resetting the virtual machine, the Amiga immediately booted from the hard drive.
Before applying any customisations, I confirmed that everything worked properly on the real hardware.

Here’s the Workbench 1.3 booting directly from the CF card:

Installed Software

In this chapter, I’m going to give an overview of all the software I installed on the A500, along with their sources — and no, it’s not always from Aminet.net!

Sources

Where I got it

Before diving into the software itself, here’s a quick overview of the main sources I used to gather everything described in this chapter.

Installed Tools

  • DiskMaster 2 – file manager (Archive.org – compilation; scanned manual)
  • CygnusED 2.12 – full-featured text editor (Archive.org – Neck utilities)
  • PowerPacker – compression and tool to read compressed content (Aminet)
  • DMS 1.11 – disk imager (Aminet)
  • TSGui – graphical interface for ADF and DMS (Aminet)
  • LHA 1.38 – Amiga’s default archiving tool (Aminet)
  • Mostra 1.06 – image viewer (ftp.funet.fi)
  • Sysinfo – hardware and system information (Aminet)
  • X-Copy Pro 3.31 – disk copier (Archive.org – Neck utilities)
  • SuperDuper – disk copier (Aminet / fish-0488)
  • BootX 4.5 – antivirus (Aminet / fish-0641; latest virus database)

Workbench enhancements

  • ARP 1.3 – better AmigaDOS commands (Aminet)
  • WShell 2.0 – better shell (Archive.org – original software disk)
  • MyMenus – allows making custom menu entries (Aminet / fish-0225)
  • FKeys – key shortcuts for window and screen switching (Aminet / fish-0532)
  • Dmouse 1.20 – screen and mouse blanker plus window management (Archive.org – Neck utilities)
  • MSClock – clock on the title bar (Aminet)
  • Setclock v34.3 – Y2K patch for SetClock (Obligement)

Applications and games

For applications, I simply installed a few classic programs from the era, mostly for fun. By today’s standards, these tools aren’t particularly productive, but they give a great sense of how software worked back then. All of them were sourced from archive.org and ftp.funet.fi (see the “Sources / Where I Got It” section for links):

  • Deluxe Paint IV
  • Pro-Tracker 3.10 – music editor (https://ftp.funet.fi/pub/amiga/audio/apps/compose/)
  • ANIMagic
  • Brilliance 2: contains the Commodore Installer for OS 1.3 => copy it to C
  • Disney Animation Studio
  • PageSetter 2
  • Wordworth 1.1
  • Scala MM 200

As for games, I only included those that are natively installable on the A500. I didn’t see the point of using JST, since I can rely on WHDLoad on my other Amigas. The games I chose come from my personal list of best Amiga titles , curated over time:

  • Turbo Trax
  • Fiendish freddy
  • Lionheart
  • MetalKombat
  • Ducktales
  • Flashback
  • Hare Raising Havoc
  • Base Jump
  • KidChaos
  • Conan the Cimmerian
  • Dragon Heart
  • BosCar
  • BlackViper
  • MegaTyphoon

Configuration & customizations

This section describes the steps I followed to customize my A500, presented roughly in the order I tackled them. Some steps are explained in more detail than others, depending on the level of customization involved.
Basically, I followed an order that allowed me to set up a fully usable environment before diving into more advanced tweaks.

Bare minimum for a usable OS

A file manager with OS 1.3 feeling

First, I installed DiskMaster 2 — a must-have if you want a proper file manager on base Workbench 1.3, which can’t even display files and directories that have no associated icons.

Here’s what I did to set it up:

  • Copied the executable to SYS:c/dm.
  • Created a setup file named dm.conf in SYS:s with the following customizations:
    • SetFormat "NS T DMY A" to remove unnecessary comments from the file list
    • Barformat "DiskMaster Chip:%C Fast:%F %T %D.%M"
  • Optimized window dimensions
  • Added a Version command: AddCmd Version, 20, extern c:version %s; Wait 2
  • Added a PlayMod command
  • Customized the Editors menu
  • Simplified the Archives menu to only LHA + DMS
  • Simplified the Tools menu and added Execute script
  • Simplified the Project menu

To launch DiskMaster, I run: dm s:dm.conf either from the shell or via a custom menu, as explained later.

Below is the full configuration file for reference:

Reset
AddMenu Project, Printer Setup, SetPrinter
AddMenu Project, Change Command, ChgCmd
AddMenu Project, Save Config, S, SaveConfig
AddMenu Project, About, About
AddMenu Project, Quit, Q, Confirm "Are you sure you want to quit?" Yes No;Quit
AddMenu Tools, Run Selected, Single;External run %s
AddMenu Tools, Execute Script, Single;External run Execute %s
AddMenu Tools, Swap S<->D, Swap
AddMenu Tools, Run DM Script, Single;Batch %s
AddMenu Archives, Lha Add, StdIO "CON:0/12/640/100/Add Window";Extern "Lha <* -r a";StdIO CLOSE
AddMenu Archives, Lha Extract, StdIO "CON:0/12/640/100/Extract Window";Extern Lha <* x %s;StdIO CLOSE
AddMenu Archives, Lha List, StdIO "CON:0/12/640/160/List Window";Extern Lha v %s;Wait;StdIO CLOSE
AddMenu Archives, DMS Write, StdIO "CON:0/12/640/160/List Window";Extern DMS write %s TO DF0:;Wait;StdIO CLOSE
AddMenu Disk, Format, Format
AddMenu Disk, DiskCopy, DiskCopy
AddMenu Disk, Format DF0:, Confirm "Are you sure?";Format DF0:
AddMenu Disk, Format DF1:, Format DF1: VERIFY "WorkDisk"
AddMenu Disk, Clear DF0:, Format DF0: QUICK INSTALL VERIFY
AddMenu Disk, Copy DF0: DF0:, DiskCopy DF0: DF0:
AddMenu Disk, Copy DF0: DF1:, DiskCopy DF0: DF1:
AddMenu Control, Lock as Source, Lock S
AddMenu Control, Lock as Dest, Lock D
AddMenu Control, UnLock, UnLock
AddMenu Control, UnLock all, UnLock All
AddMenu Control, Toggle Expand, Expand
AddMenu Editors, Textra, T, Extern run Textra %s
AddMenu Editors, CygnusED, T, Extern run Sys:Utilities/CygnusED %s
AddMenu Editors, EditPad, T, Extern run Sys:Utilities/Notepad %s
Button "Parent"
SetFormat "NS T DMY A"
BarFormat "DiskMaster     Chip:%C   Fast:%F      %T  %D.%M"
TitleFormat "%B/%F %I/%C"
OpenScreen 2
Color 05A FFF 002 F80
Font topaz/8
OpenWindow 278 11 84 245 CMD
AddCmd Root, 10, Root
AddCmd Parent, 10, Parent
AddCmd All, 30, Select *
AddCmd Clear, 30, Deselect *
AddCmd Select, 30, Select
AddCmd Exclude, 30, DeSelect
AddCmd Copy, 20, ReqPattern;Copy %s %d
AddCmd Cp New, 20, Copy %s %d NEWER
AddCmd Move, 20, ReqPattern;Move %s %d
AddCmd Delete, 30, ReqPattern;Confirm "All selected files will be lost.";Delete %s
AddCmd Rename, 20, Recurse OFF;Rename %s
AddCmd Protect, 20, Recurse OFF;Protect %s
AddCmd Comment, 20, Recurse OFF;Comment %s
AddCmd Find, 20, ReqPattern "Please enter search pattern";Find %s
AddCmd Read, 20, Read %s
AddCmd HexRead, 20, Read %s HEX
AddCmd ShowPic, 20, ShowPic %s
AddCmd MakeDir, 20, MakeDir
AddCmd Size, 20, UnMark OFF;Check %s
AddCmd Version, 20, extern c:version %s; Wait 2
AddCmd Playmod, 20, extern run APPS:Protracker/backplay %s
OpenWindow 362 11 278 245
OpenWindow 0 11 278 245
AddAutoCmd FORM????ILBM,ShowPic %s
AddAutoCmd FORM????ACBM,ShowPic %s
AddAutoCmd FORM????8SVX,ShowPic %s
AddAutoCmd FORM????ANIM,Extern View %s
AddAutoCmd ??-lh,StdIO "CON:0/12/640/100/Extract Window";Extern Lha <* x %s;StdIO CLOSE
AddAutoCmd TEXT,Read %s
AddAutoCmd DEFAULT,Read %s HEX

Text editors

Once you have a proper file manager, the next thing you’ll do most often while configuring and customizing Workbench 1.3 is editing configuration files . For this reason, I installed two excellent text editors — far superior to the default NotePad or ED .

  • Textra: Installed to SYS:c as a lightweight but powerful editor for quick edits and rapid file changes.

  • CygnusED: Installed to SYS:Utilities, with req.library placed in SYS:libs, providing a full-featured, professional editor for more complex tasks (albeit heavier).

Both editors complement each other: Textra for speed, CygnusED for advanced editing.

Better shell

If, like me, you’re used to Bash or Zsh, the original Amiga Shell — even in the 3.x releases — feels quite limited, missing some “basic” features we take for granted. Fortunately, two tools make the CLI interface far more user-friendly: ARP 1.3 and WShell.

For ARP, I simply followed the installer and opted not to install the ARP shell, keeping the setup minimal.

WShell, on the other hand, comes with an installer that can be run directly from the CLI: Wshell-install

It doesn’t create an icon, so it’s invisible from Workbench by default. I made several customizations to integrate it better:

  • Copied NewWSH to the Workbench partition, allowing WShell to be started via an icon.
  • Set the ToolTypes as follows:

CONSOLE=CON:20/15/580/150/WShell/CLOSE
FROM=S:WShell-Startup
NAME=WShell

  • Configured the default shell window in S:ENV/shellwindow: CON:20/15/580/150/WShell/CLOSE

I also tweaked the FComp configuration (SYS:s/Config-Fcomp) to get more familiar key usage:

  • TAB for autocomplete instead of ESC
  • Arrow keys Up / Down for line navigation
  • HOME / END for session top/bottom
  • PAGE UP / DOWN for session page up/down

Here is the full configuration file for reference:

; Options record: SORT/S,GROUP/S,NOPATH/S,NOTOOLTYPES/S
OPTIONS nopath ; notooltypes
; FILETYPE records: `FILETYPE/K,FMT/K,REMOVE/S'
FILETYPE TEXT FMT "E %a"
FILETYPE ILBM FMT "sys:utilities/display %a*N"
FILETYPE DOC FMT "sys:utilities/more %a*N"
; Command records: `COMMAND/K,PATH/K,PAT/K,FMT/K,REMOVE/S'
COMMAND EXecute PATH S: ; an argument PATH
COMMAND DELete FMT "%f%0 %1 %2 %3%l" ; multiple files
COMMAND REName FMT "REName FROM %0 TO %0" ; command-specific rewrite
COMMAND tex PAT "#?.tex"
COMMAND DVisw PAT "#?.dvi" FMT "%f%r0%l"
COMMAND wait FMT "You're waiting ... %0" ; input context example
COMMAND VERsion PATH "libs:,devs:"
; Hotkeys: `KEY/K,QUAL/K,PATH/K,PAT/K,FMT/K,AUTO/S,REMOVE/S'
KEY 66 QUAL 0 ; TAB key for completion
KEY 29 FMT ";Choices: %0 %1 %2 %3 %4 %5 %6 %7 %8 %9"
; Input keys: `KEY/K,QUAL/K,NAME/K,PATH/K,PAT/K,FMT/K,AUTO/S,REMOVE/S'
KEY 76 QUAL 8 NAME CTRL-UARROW FMT "*E[101]" ; search up
KEY 77 QUAL 8 NAME CTRL-RARROW FMT "*E[100]" ; search down
KEY 62 QUAL 0 NAME KPUARROW FMT "*E[103]" ; line up
KEY 30 QUAL 0 NAME KPDARROW FMT "*E[102]" ; line down
KEY 31 QUAL 0 NAME PGUP FMT "*E[113]" ; page up
KEY 63 QUAL 0 NAME PGDOWN FMT "*E[112]" ; page down
KEY 61 QUAL 0 NAME HOME FMT "*E[99]" ; session top
KEY 29 QUAL 0 NAME END FMT "*E[98]" ; session bottom
KEY 79 QUAL 16 NAME LALT-LARROW FMT "*E[79]" ; skip left name  alt-control-O
KEY 78 QUAL 16 NAME LALT-RARROW FMT "*E[73]" ; skip right name alt-control-I
KEY 79 QUAL 8 NAME CTRL-LARROW FMT "*E[85]" ; del left name   alt-control-U
KEY 78 QUAL 8 NAME CTRL-RARROW FMT "*E[89]" ; del right name  alt-control-Y

I also customized the WShell prompt in S:WShell-Startup to make it more informative and visually clear: the time is displayed between brackets in black (color 2), followed by the current path in orange (color 3).

Here is the content of SYS:s/WShell-Startup:

Additionally, I modified SYS:s/ENV/titlebar to display the shell number, free fast memory, and free chip memory:

%w %n - %mc chip / %mf fast

Finally, I set WShell as the default CLI by adding it somewhere in the startup-sequence.
The extract is below for reference:

; WShell
assign remove CON: ; is replaced by the next line
C:DHOpts CON: PIP: ; set the new display handler
C:FComp ; enable completion and history navigation
C:SetExecute ; use wshell for Execute command

Custom menu for quick access to most useful tools

The final touch for a more usable Workbench 1.3 is customizing the system menu to include shortcuts to the most frequently used tools, such as DiskMaster, Textra, and NewShell.

To achieve this, I installed MyMenu following the official documentation:

  • Copied the main program to SYS:C.
  • Copied MyMenu.conf to SYS:S and configured it according to my preferences.
  • Copied MyMenu-Handler to SYS:L.
  • Called MyMenu in the startup-sequence, right after LoadWB.

The full configuration file is as follows:

color 2
menu <D> Tools DiskMaster | CLI SYS:c/dm SYS:s/dm.conf
menu <S> Tools NewShell | WB SYS:NewWSH
menu <B> Tools BootX | WB SYS:System/bootx
menu <T> Tools Textra | CLI SYS:c/Textra
menu <S> Tools CygnusED | WB SYS:Utilities/CygnusED
menu <A> Floppy Dms-Adf | WB SYS:tools/tsgui
menu <D> Floppy SuperDuper | WB SYS:tools/SD
menu <X> Floppy X-Copy | CLI SYS:c/xCopy

Workbench enhancements & tools

The following software is not strictly necessary, but each clearly enhances the Workbench 1.3 experience . They are easy to install, require little to no configuration, and bring useful improvements to everyday use. I’ll go quickly through them:

  • FKeys – keyboard shortcuts to switch between windows and screens. Copied to a new Commodities drawer on SYS: and run from the startup-sequence.
  • Dmouse – fine-tuned mouse accelerator and screen blanker. Executable to SYS:C, handler to SYS:L, launched via startup-sequence: dmouse -a1 -t0 -A0
  • Msclock – displays the time on the menu bar. Same installation logic as Dmouse: executable to SYS:C, handler to SYS:L, then run from startup-sequence: msclock -d -m -o
  • PPnew – PowerPacker tools & libraries (required for some packed programs and mods). Copied PPMore/powerpacker.library to SYS:libs; pp and PPMore to SYS:C; PPMore.doc to SYS:docs; same for ppShow and ppAnim.
  • LHA – default file archiver on AmigaOS. Ran LHA_e138.run to extract the files, then copied lha, splitlzh, and joinlzh to SYS:C.
  • BootX – up-to-date antivirus for OS 1.3. On my setup it crashes often, but it can detect viruses in memory, bootblocks, floppies, and files. Installation: libs/reqtools.library.13 to SYS:libs, all BootX files to SYS:system, BootX.doc to SYS:docs, latest recognition file to SYS:system. Adjusted colors for a Workbench 1.3 look: color1=blue 05A, color2=white FFF, color3=black 002, color4=orange F80.
  • Setclock v34.3 – Y2K-compatible clock for OS 1.3. Prevents year misinterpretation (e.g., 2000=1979).
  • Mostra 1.08 – image viewer. Copied to SYS:Utilities.

These tools improve daily usability, add visual polish, and ensure compatibility with classic file formats and archives.

Floppy disk Tools

Even though I can manipulate Amiga floppies on my other machines, sometimes it’s quicker to work directly on the A500 when it’s connected. The following software makes floppy management much easier:

  • X-Copy – well-known floppy disk copier. Copied to SYS:C.
  • DMS and TSGui – floppy disk (un)archiver and associated GUI. Ran dms1111.run to extract DMS, and unlha for the TSGui archive. Then copied: dms to SYS:C, DMS.doc to SYS:docs, tsgui to SYS:Tools.
  • SuperDuper – another floppy disk copier. Copied sd to SYS:Tools and the documentation to SYS:Docs.

Tested but removed

I also tried installing and using some other interesting tools and hacks, but ultimately removed them because they caused crashes or unexpected behavior on my setup:

  • ZoomDaemon: Adds a “new window” widget to minimize windows. However, it also displayed this for invisible Workbench windows, which looked awkward — and it caused frequent crashes. At least my system is stable again without it.
  • NoClick2: Ran fine in Amiberry/UAE, but crashed on the real Amiga 500.
  • SimGen + RunBack: Fun for adding backdrop pictures, but it led to unexpected and frequent Guru Meditation errors.

Sometimes, stability wins over flashy features, especially when working with a vintage machine like the A500.

Startup-sequence

It’s now time to share my startup-sequence. Of course, everyone has their own rules and preferences, so I’m simply presenting mine as an example.

My approach was guided by three main goals:

  1. Simplify the default OS 1.3 naming conventions: instead of juggling startup-sequence, startup-sequence.FD, and startupII.
  2. Consolidate everything related to my base but customized Workbench into a single file for easier maintenance.
  3. Create a user-startup, similar to OS 2.0+, mainly to handle application-specific assigns and personal tweaks (a sketch of it follows after the listing below).

The full startup-sequence file is provided below for reference:

SetPatch >NIL:
SetPatchMrgCop >NIL:
SYS:System/FastMemFirst
SetClock load
Addbuffers df0: 30
; faster text rendition
FF >NIL: -0
; preload most used commands
resident c:Resident pure
resident c:List pure
resident c:CD pure
resident c:Mount pure
resident c:Assign pure
resident c:Makedir pure
resident c:dir pure
resident CLI L:Shell-Seg SYSTEM pure add ; activate Shell
; assign
assign sys: dh0:
assign c: SYS:c
assign L: SYS:l
assign FONTS: SYS:fonts
assign CGFONTS: SYS:CGFonts
assign CGCACHE: SYS:CGFonts/CGCache
assign S: SYS:s
assign DEVS: SYS:devs
assign LIBS: SYS:libs
; Ramdisk config
makedir ram:t
makedir ram:env
makedir ram:clipboards
assign t: ram:t
assign ENV: ram:env
assign CLIPS: ram:clipboards
copy S:env/ ENV: QUIET
copy S:Ramdisk.info ram:Disk.info
copy S:ram.info ram:.info
; Mounts
mount speak:
mount aux:
mount pipe:
; WShell
assign remove CON: ; is replaced by the next line
C:DHOpts CON: PIP: ; set the new display handler
C:FComp ; enable completion and history navigation
C:SetExecute ; use wshell for Execute command
; set keymap
SYS:System/SetMap F
;set path for Workbench
path ram: c: sys:utilities sys:system s: sys:prefs add
C:dmouse >NIL: -a1 -t0 -A0
C:msclock >NIL: -d -m -o
SYS:commodities/Fkeys >NIL:
execute s:user-startup
; load workbench
LoadWB delay
C:MyMenu
endcli
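
The user-startup executed near the end of the listing isn’t reproduced in this article; as a rough sketch, assuming the partition names from my layout above (the assign targets are illustrative, not a dump of my actual file), it mostly holds lines like:

; s:user-startup – application-specific assigns and personal tweaks
assign APPS: DH1:
assign GAMES: DH2:
assign DATA: DH3:
; per-application assigns and small hacks go below this point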

ScreenShots

To produce cleaner and more polished screenshots, I captured them using Amiberry / UAE rather than the real hardware.

This allows for crisp images that clearly show the Workbench, tools, and customizations without the glare or color inconsistencies that sometimes appear on a CRT display.

Below are several examples illustrating my setup and configurations:

See it live on real hardware

If you want to see the fully restored and customized Amiga 500 in action, here’s a video showing it running on the real hardware. It demonstrates the Workbench, tools, and all the tweaks described in this article.

The Cities Skylines Paradox: how the sequel stumbled

Hacker News
selix.net
2025-11-19 12:02:15
Comments...
Original Article

Why the sequel stumbled
and how a new studio might save it

Announcement: “An Update on Cities: Skylines II” (Paradox Interactive).

In mid-November 2025 Paradox Interactive and its long-time partner Colossal Order announced a quiet but monumental shift. After more than fifteen years together, the companies would “pursue independent paths”. The Cities: Skylines franchise – Paradox’s flagship city-building series – would be handed to Iceflake Studios, an internal Finnish team. Colossal Order (CO) would finish one last “Bike Patch” and an asset-editing beta, then move on to other projects. The announcement formalised a split that players and critics had anticipated for months. Cities: Skylines II (CS2) had launched in October 2023 to technical issues, design missteps and a conspicuous lack of mod support. A year later, many of those problems persisted, and Paradox’s patience wore thin.

In this article I attempt to disentangle the facts of that breakup, to understand why CO floundered, why Iceflake has been given the keys, and whether the sequel’s underlying issues can realistically be fixed.

A skyline transitions from summer to winter

A brief history of the series

Cities: Skylines (2015) emerged from the rubble of Maxis’ SimCity reboot, combining approachable city-planning mechanics with modding openness. Developed by the Helsinki-based Colossal Order and published by Paradox Interactive, CS1 quickly became the dominant city builder. Its success spawned dozens of expansions and thousands of user-made mods via Steam Workshop. CO – a studio of around thirty people – became a darling of the simulation genre.

Technical sources: Launch performance warning (GameSpot); CS2 performance analysis (Paavo Huhtala).

In 2023 CO attempted to leap ahead with a sequel. Built in Unity’s High Definition Render Pipeline (HDRP) and promising per-citizen simulation, a dynamic economy and cross-platform modding, CS2 launched on PC in October 2023. Even before release, Paradox warned that performance might not meet players’ expectations. The warning was prescient: the game shipped with heavy GPU bottlenecks, slow simulation speeds and a bare-bones economy. An autopsy by developer Paavo Huhtala found that every pedestrian model had 6,000 vertices (complete with fully modelled teeth) and that props such as pallet stacks were rendered in full detail even when invisible. The engine lacked occlusion culling and relied on high-resolution shadow maps, causing “an innumerable number of draw calls”. The result was a city builder that taxed even high-end GPUs while leaving CPU cores idle.

Player critique: “One Year Later – Cities: Skylines II Is Still a Broken, Lifeless Mess” (Paradox Plaza forums).

Alongside the rendering problems were deeper simulation issues. A year after release one forum thread titled “One Year Later – Cities: Skylines II Is Still a Broken, Lifeless Mess” complained of mindless citizens, dead public spaces and traffic AI that took nonsensical routes. The poster wrote that the sequel’s touted dynamic economy was “nonexistent”. Such criticisms weren’t isolated; they reflected a broader perception that CS2 had shipped as an unfinished Early Access game. CO acknowledged the problems and postponed the console release and paid DLC to focus on patches. Despite multiple updates, players still reported simulation slow-downs and path-finding issues in 2024 and 2025.

Modding coverage: Paradox Mods FAQ (Shacknews); Hallikainen on missing mod support (Game Rant).

Modding – a pillar of the first game – was largely absent. Paradox and CO announced that, unlike CS1’s open Steam Workshop, CS2 would use Paradox Mods , a centralised platform to ensure cross-platform compatibility. In October 2023 Shacknews quoted an official FAQ explaining that mods would be “confined in official capacity to the Paradox Mods platform” because the publisher wanted a single hub accessible on both PC and console. The FAQ went further: “We won’t support other platforms such as Steam Workshop”. This business decision frustrated PC modders and delayed many of the quality-of-life fixes that CS1 had enjoyed through community mods. In February 2024, CO CEO Mariina Hallikainen admitted that the team’s “biggest regret” was launching without mod support; Gamerant summarised her comments, noting that she acknowledged community frustration over the missing Editor and inadequate mod tools.

The facts of the change

Paradox’s November 17 2025 update sets out the formal arrangements. The post states that Paradox and Colossal Order “mutually decided to pursue independent paths” and that the decision was taken “thoughtfully and in the interest of both teams”. The Cities: Skylines franchise will move to Iceflake Studios, one of Paradox’s internal management-game teams based in Tampere, Finland. Iceflake will take over “all existing and future development” of CS2, including free updates, ongoing work on the in-game Editor and console versions, and future expansions. CO will deliver one final update, colloquially called the Bike Patch , adding bicycle infrastructure, Old Town buildings and bug fixes. A beta of the asset-editing tools will be released before year-end, after which Iceflake will assume full development duties from the start of 2026.

Statements from the principals frame the split as amicable. Hallikainen thanked Paradox for fifteen years of collaboration and said CO was “excited to channel our experience, creativity, and passion into new projects”. Paradox deputy CEO Mattias Lilja expressed gratitude for CO’s achievements and emphasised Paradox’s commitment to “provide [Cities players] with more content and new experiences”. Iceflake studio manager Lasse Liljedahl called taking the reins “an immense honor and a great responsibility” and said the team sees “a strong foundation and so much potential waiting to be unleashed”. Together, the statements project optimism: the old guard departs gracefully, the publisher pledges continued support, and a new studio vows to unlock the game’s latent promise.

On launching early: Paradox on Cities: Skylines II and iteration regrets (Kotaku).

Yet hidden between the lines is a tacit admission of failure. In an October 2024 interview discussed in Kotaku, Lilja conceded that launching CS2 in October 2023 was a mistake, saying that Paradox and CO were “actually in agreement that iterating this live was probably the right way to go” but that, in hindsight, they “should probably not launch that early”. In other words, the game was knowingly released unfinished with the hope that post-launch patches would complete it; the strategy backfired. By late 2025 the sequel remained tarnished, and shifting development to an internal studio gave Paradox a way to reframe the narrative without cancelling the project.

Why Colossal Order faltered

Several interlocking factors contributed to CO’s struggles with CS2.

Technical overreach

The team aimed high: a next-generation city builder with per-citizen simulation, realistic economies and cinematic visuals. But CO was still a 30-person studio – tiny by AAA standards – and Unity HDRP proved unforgiving. The engine’s GPU bottlenecks weren’t the result of exotic path-tracing but of ordinary models being rendered at absurd detail. Buildings and props lacked lower-detail meshes and proper occlusion culling, so millions of polygons were drawn even when off-screen. Shadows were computed at high resolution for every object. These problems could theoretically be solved through asset rework and rendering optimisations, but doing so required months of drudge work and careful pipeline changes – hard tasks for a small team already firefighting bugs.

On the simulation side, CO promised a dynamic economy and deep agent-based behaviours, but the implementation lagged behind the ambition. Players complained that citizens moved like drones, parks were empty and emergency services were purely decorative. Traffic AI took nonsensical routes, and public transport usage barely affected congestion. Economic interactions between industries were shallow, and the employment model produced bizarre labour shortages or surpluses. Fixing such systemic issues often requires redesign rather than quick patches; CO did release an Economy 2.0 update in mid-2024, but by the time of the split the simulation still felt off.

Management and business constraints

CO was simultaneously developing the PC release, console ports and multiple DLCs while also building an entirely new modding platform. Paradox’s decision to use Paradox Mods for cross-platform compatibility meant that CO had to engineer modding tools that worked on PC, Xbox and PlayStation while meeting console platform security requirements. As the Shacknews article notes, Paradox and CO confirmed that mods would be “confined in official capacity to the Paradox Mods platform” and that there would be no official support for Steam Workshop or Nexus Mods. The rationale was to provide a “centralized, cross-platform hub”, but it removed the de-facto modding infrastructure that had empowered CS1. Building a secure, cross-platform modding system is a multi-year effort; CO underestimated the work and ended up shipping the game without modding tools at all. Hallikainen later called this omission their “biggest regret”.

At the same time, Paradox wanted a steady flow of revenue from DLC and console versions. Lilja’s comments reveal that the publisher deliberately chose to release early and iterate publicly. That strategy might work for small indie games, but CS2’s player base expected a polished sequel, and paying customers became unwilling beta testers. Patches that fixed one issue often introduced new bugs, and repeated delays of the console release eroded trust.

Human factors

CO’s team had been working on city-builders for over a decade. Burnout and fatigue likely played a role. The company’s history is entwined with the Cities series; moving on allows them to avoid being perpetually defined by “the team that broke Cities” and to experiment with new projects. Their public statements emphasise gratitude and optimism, suggesting that leaving the franchise was as much a relief as a dismissal.

Why Iceflake might succeed

Past work: Surviving the Aftermath review (Screen Rant).

Iceflake Studios isn’t a household name, but it has relevant experience. Founded in 2007 and acquired by Paradox in 2020, Iceflake developed Surviving the Aftermath , a post-apocalyptic colony-builder that entered early access in 2019 and reached full release in November 2021. Screen Rant’s review described it as an “entertaining city-building game” and praised its blend of survival mechanics and management. The game launched rough in early access but steadily improved; by 1.0 it was viewed as “mixed or average” by Metacritic (around 69/100) and maintained a consistent player base. Unlike CS2, its challenges stemmed more from content depth and pacing than from catastrophic performance problems. Iceflake therefore has experience iterating a complex simulation into a stable product.

As an internal studio, Iceflake is directly accountable to Paradox. The publisher can allocate more resources, embed technical specialists and control the roadmap more closely than with an external partner. Iceflake also inherits CS2’s source code, toolchain and documentation. Without the emotional investment that CO had, Iceflake may be more willing to prune systems, simplify mechanics and cut features that don’t work. Liljedahl emphasised that Iceflake sees “a strong foundation and so much potential waiting to be unleashed”. The foundation isn’t nothing: CS2 has larger maps, improved road tools, realistic topography and flexible zoning. If Iceflake can optimise assets, implement proper level-of-detail and occlusion culling and iteratively rework the simulation, the game could reach a state where it’s enjoyable for mainstream players.

However, expectations must be managed. Iceflake cannot rewrite the engine from scratch. The Unity/HDRP foundation, the cross-platform modding constraints and many of the simulation patterns are baked in. The studio will likely focus on performance optimisation, bug fixing and incremental economy/traffic improvements rather than grand redesigns. The Paradox Mods platform will remain the only officially supported mod hub, so deep code mods akin to CS1’s may never return. That’s a business decision that Iceflake cannot overturn.

Paradox’s course correction

The publisher’s response to CS2’s troubled launch reveals a broader shift within Paradox. Kotaku’s October 2024 piece notes that Paradox executives have been on an “apology tour” addressing missteps across several projects, including Bloodlines 2, Prison Architect 2 and the cancelled Life By You. Lilja admitted to PC Gamer that they misjudged hardware compatibility and that releasing early was a misstep. By moving CS2 to an internal studio, Paradox signals a desire to control timelines, budgets and quality more tightly. It mirrors similar decisions: Paradox previously shifted development of Bloodlines 2 to a new studio and delayed Prison Architect 2 indefinitely due to technical problems. The company appears to be prioritising quality over rushing sequels out the door.

Paradox has also been transparent about what the short-term roadmap entails: the Bike Patch, asset-mod beta and ongoing console work. After Iceflake takes over, the studio will share its own plans. The messaging emphasises continuity rather than abandonment. There’s no talk of a Cities: Skylines III, and Paradox continues to encourage players to connect their Paradox accounts for cosmetic rewards. Whether this rebuilds trust depends on execution.


Ultimately, Cities: Skylines II is a cautionary tale of ambition outrunning capacity. Colossal Order set out to deliver the most realistic, detailed city-builder ever made but underestimated the technical and design challenges. A small team built an engine that rendered thousands of hidden vertices, shipped without proper mod support and relied on patches to finish the simulation. Paradox, eager to capitalise on the success of CS1, allowed an unfinished game to launch, hoping to “iterate live”. Players rightly rebelled. A year later the sequel still feels unfinished, and the publisher has handed the project to an internal studio while letting the original creators bow out gracefully.

Does this mean CS2 is doomed? Not necessarily. Iceflake inherits a game with a solid core and a passionate community. The studio’s history with Surviving the Aftermath shows it can shepherd a complex management game from rough early access to a polished release. Paradox’s decision to move development in-house suggests a willingness to allocate resources and accept delays. Significant performance fixes – better LODs, occlusion culling, asset optimisation – are engineering tasks that can be accomplished over time. Simulation adjustments to traffic and economy are harder but not impossible. What CS2 will never become is CS1 with all the modding freedom; the Paradox Mods platform and console parity goals make that clear. For players willing to accept that constraint, there is still hope that Iceflake can turn CS2 into a stable, satisfying city builder. The road will be long, but at least the car is now being driven by a team that isn’t running on fumes.

Learning to Boot from PXE

Hacker News
blog.imraniqbal.org
2025-11-19 11:18:59
Comments...
Original Article

Posted on

I bought a new laptop, the GPD Pocket 4. It came with windows installed by default, and I wanted to install nix on it.

I grabbed a USB drive, dd'd the NixOS ISO image onto it, and tried to boot. The laptop did not recognize the drive. Turns out the drive had crapped out; no computer would boot off it.

The normal thing to do would've been to just go get a new USB drive, install off of it, and go about setting the laptop up. That meant I would either have to go outside or wait for a new USB drive to arrive. I don't want to go outside and I don't want to wait to set up my laptop. I have free time now and I have no clue when I will have free time next.

The menu had two other boot options, something about PXE over IPv4 or IPv6. I only knew that PXE allowed network boot. So hey, let's use this time to learn something new.

# DHCP

As I've learned, the first half of this process is DHCP. When a device is connected to the network it sends out a "HEY GIVE ME AN IP" message (I don't actually know how it works and didn't bother to look it up). Then your DHCP service sees this message and responds back with an IP. As part of these requests the client and server can set "options" which carry additional information. I don't know what the client sets first, but I do know the server needs to set a boot file name and the location of a TFTP server. TFTP is sort of like FTP.

PXE reads the boot file (usually something.pxe) from the TFTP server and then executes its code. Other boot files are then retrieved as needed from the TFTP server.

While learning this, I noticed folks on the internet don't seem too fond of TFTP, saying it can be slow. There exists iPXE, which is supposed to be a better PXE. PXE implementations (like BIOSes) tend to be manufacturer specific and are not created equal. iPXE tries to be better and supports a bunch of other stuff (like booting from an ISO and talking HTTP). So if this all goes well I get iPXE going, point it to the ISO I've already downloaded, and I'm off to the races!

Spoiler alert, I didn't get to the races.

To get iPXE running, the iPXE.pxe executable needs to be served by TFTP. I am running an OPNsense box for my router/firewall and it has enough disk space and RAM that I should be able to do this whole process off of it. Setting the DHCP stuff is easy enough via the UI. The iPXE client sets a client option on its DHCP requests, so you want to create a tag in OPNsense off its user-class (iPXE) and respond with a DHCP boot (what the tab in the UI is called) value of the HTTP server.

The flow should be:

PXE -> Gets TFTP address -> Downloads and runs iPXE
iPXE -> Gets HTTP address -> Does iPXE stuff like running our ISO
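
For reference, the same chain-loading logic can be written down in dnsmasq's own syntax. I did the DHCP part through the OPNsense UI instead, so treat this as an illustrative sketch (the 10.0.0.1 URL and boot.ipxe name match what I use later):

# plain PXE firmware: hand out the iPXE binary over TFTP
dhcp-boot=tag:!ipxe,ipxe.efi
# clients that identify themselves with the iPXE user-class get the HTTP script instead
dhcp-userclass=set:ipxe,iPXE
dhcp-boot=tag:ipxe,http://10.0.0.1/ipxe/boot.ipxe
# dnsmasq can serve the TFTP file itself
enable-tftp
tftp-root=/srv/tftp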

The DHCP stuff can be done through the UI, so it was. The TFTP stuff was not available in the web UI, so it had to be done through SSH.

# TFTP

This was my first time shelling into a BSD box. After this whole process I was left feeling that (Free)BSD is oddly cozy. I can't explain how or why, but it just is. The login prompt from OPNsense, the simple shell prompt (csh?), the man pages, the disk layout, the programs. Even though I didn't have access to all the newer tools (nvim / rg vs vim / grep), I still got what I wanted done and it just felt cute and cozy.

Anyway, OPNsense ships with dnsmasq, and dnsmasq can also act as a TFTP server. I found this out when trying to search for a TFTP program to install via the UI. I didn't know how to enable it, nor did I want to look it up (via the internet), so I just read the man page.

man dnsmasq

Reading the man page was a pleasant experience (or maybe it was just my first time reading something from section 8). It told me exactly what the program could do and how to configure it (I just searched for tftp). The conf files were listed at the bottom, the first being /etc/dnsmasq.conf, which did not exist on my system, but /usr/local/etc/dnsmasq.conf did.

The first line of that file warns you not to edit it manually, and near the bottom you see the conf-dir option set to /usr/local/etc/dnsmasq.conf.d. I saw a README in that conf dir, and cat-ing it gave this message:

cat /usr/local/etc/dnsmasq.conf.d/README
# Dnsmasq plugin directory:
# Add your *.conf files here, read in alphabetical order

Well sure, why not, let's do that:

# drop-in config for dnsmasq (the two lines between vim and :x are the file contents)
vim /usr/local/etc/dnsmasq.conf.d/10-tftp.conf
enable-tftp
tftp-root=/srv/tftp
:x
# create the TFTP root and grab the UEFI build of iPXE
mkdir -p /srv/tftp
fetch -r https://boot.ipxe.org/ipxe.efi -o /srv/tftp/

I used the web UI to restart dnsmasq, but you can also use configctl to do it from the shell. Now when I boot up the laptop, I see it load iPXE but then fail, as the HTTP server does not exist yet. That is progress though; now we just need to serve our ISO over HTTP.

One thing to note is that nearly all the instructions online focus on legacy/BIOS boot. All my devices boot via UEFI (which is why we downloaded the .efi above instead of the .kpxe file). There are ways to set up DHCP to respond with the appropriate file for either UEFI or BIOS boot, but I don't care enough (a rough sketch is below anyway). There are also other things that try to simplify this whole process, like pixieboot and netboot.xyz, but I am not interested in them.
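Serving the right file to both firmware types usually keys off DHCP option 93 (the client architecture). A rough dnsmasq sketch I have not tested, assuming the legacy BIOS build of iPXE (undionly.kpxe) also sits in the TFTP root:

# UEFI x86-64 clients report client-arch 7 (or 9 on some firmware)
dhcp-match=set:efi64,option:client-arch,7
dhcp-match=set:efi64,option:client-arch,9
dhcp-boot=tag:efi64,ipxe.efi,,10.0.0.1
# everything else gets the legacy BIOS build
dhcp-boot=tag:!efi64,undionly.kpxe,,10.0.0.1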

# HTTP

OPNsense runs lighttpd to serve its web UI, and I would like to piggyback off it for the iPXE stuff.

The trickiest part here was finding out (via ps) that the web UI configuration lives at /usr/local/etc/lighttpd_webgui/. I had to disable the SSL redirect option in the web UI and instead add the redirect myself at the end of my conf file, due to how the confs are loaded. I could not think of a different way to disable the port 443 redirect for just the ipxe paths.

cat /usr/local/etc/lighttpd_webgui/conf.d/00-ipxe.conf
# Serve /srv/tftp under http://<ip>/ipxe/
alias.url += ( "/ipxe/" => "/srv/tftp/" )
url.redirect += ( "^/ipxe$" => "/ipxe/" )

$SERVER["socket"] == "0.0.0.0:80" {
    ssl.engine = "disable"
    $HTTP["url"] !~ "^/ipxe(?:/|$)" {
        $HTTP["host"] =~ "(.*)" {
            url.redirect = ( "^/(.*)" => "https://%1/$1" )
        }
    }
}

$SERVER["socket"] == "[::]:80" {
    ssl.engine = "disable"
    $HTTP["url"] !~ "^/ipxe(?:/|$)" {
        $HTTP["host"] =~ "(.*)" {
            url.redirect = ( "^/(.*)" => "https://%1/$1" )
        }
    }
}
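A quick check from another machine confirms the alias resolves and that the ipxe path is exempt from the HTTPS redirect (the IP and filename are just from my setup):

curl -I http://10.0.0.1/ipxe/ipxe.efi
# expect a 200 OK here; anything outside /ipxe/ should still bounce to https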

I started off with a basic boot.ipxe file:

#!ipxe
:menu
menu Choose an ISO
item nix-minimal NixOS 25.05 Minimal
item nix-gui     NixOS 25.05 GUI
choose target && goto ${target}

:nix-minimal
sanboot http://10.0.0.1/ipxe/nixos-minimal-25.05.812242.3de8f8d73e35-x86_64-linux.iso
goto menu

:nix-gui
sanboot http://10.0.0.1/ipxe/nixos-graphical-25.05.812242.3de8f8d73e35-x86_64-linux.iso
goto menu

And here is what I spoiled earlier: it didn't work.

I would get a boot, but then NixOS would complain about /mnt/iso or something being missing and fail to go further.

This discussion has better information on why it doesn't work: https://github.com/ipxe/ipxe/discussions/962

# Proper netboot files

So my dreams of network booting off an ISO were crushed. Where do I go from here?

Well, it turns out the ISO ships with a bootloader, which contains instructions on how to boot a kernel with an initial ram disk (hint: this is when I learned what initrd means). So can't we do the same? The answer is yes! (Or so I think.) I didn't try to extract the files out of the ISO, but instead used Nix's built-in netboot image generator, which builds the necessary files.
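If I remember right, generating those files comes down to building the netboot attributes out of the NixOS module tree. A rough sketch from memory (attribute names may differ between nixpkgs versions, so treat this as a starting point rather than gospel):

# minimal configuration that pulls in the netboot-minimal installer module
cat > netboot.nix <<'EOF'
{ config, pkgs, ... }: {
  imports = [ <nixpkgs/nixos/modules/installer/netboot/netboot-minimal.nix> ];
}
EOF

nix-build '<nixpkgs/nixos>' -I nixos-config=./netboot.nix \
  -A config.system.build.kernel \
  -A config.system.build.netbootRamdisk \
  -A config.system.build.netbootIpxeScript
# copy bzImage, initrd and netboot.ipxe from the result symlinks into the HTTP root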

I only had to tweak the generated .ipxe file to include the HTTP URLs, and everything worked out in the end.

cat netboot.ipxe
#!ipxe
# Use the cmdline variable to allow the user to specify custom kernel params
# when chainloading this script from other iPXE scripts like netboot.xyz
kernel http://10.0.0.1/ipxe/bzImage init=/nix/store/hrgkskx4jqdz4nl3p1f4m1dvrr9b3lij-nixos-system-nixos-kexec-25.11pre708350.gfedcba/init initrd=initrd nohibernate loglevel=4 lsm=landlock,yama,bpf ${cmdline}
initrd http://10.0.0.1/ipxe/initrd
boot

I still wonder if I can extract the files from the graphical installer and boot KDE off the network, but now that the OS is installed my interest has waned. Maybe one day I will revisit it.

Episode Seven: Dirty Information

Intercept
theintercept.com
2025-11-19 11:00:00
Years before the police killing of Breonna Taylor brought “no-knock” raids into the national spotlight, the NYPD mistakenly raided Alberta Spruill’s home — and literally scared her to death.
Original Article

In 2003, New York narcotics officers raided Alberta Spruill’s home, shattering her door and detonating a flash grenade. Spruill, a 57-year-old city worker, went into cardiac arrest and died two hours later. The raid was based on faulty intel from a discredited informant, and the suspect they were searching for was already in custody. Spruill’s death came amid a surge in New York City Police Department raids, which had skyrocketed from 1,400 in the mid-’90s to over 5,000 by the time she was killed, nearly all no-knock.

Despite repeated warnings that these reckless raids would end in tragedy, few listened. This episode of Collateral Damage , hosted by Radley Balko , explores how Spruill’s death catalyzed the political rise of Eric Adams, a young Black NYPD officer who would later become mayor. It also examines how promises of reform quickly faded, and the NYPD returned to business as usual.

Transcript

Radley Balko: On an early spring morning in Harlem, 57-year-old Alberta Spruill was getting ready for work. She had worked for the City of New York for nearly three decades. And at the time, she worked in the personnel office of the Department of Citywide Administrative Services.

Joel Berger: Alberta Spruill was a Black woman, a perfectly innocent person with no criminal record of any kind.

Radley Balko: As Spruill went through her morning routine, a heavily armed team of police officers lined up outside her apartment. Seconds later, they took down her door with a battering ram.

Derek Sells: The police on May 16, 2003, at a little past 6 a.m. broke into Ms. Spruill’s apartment. They knocked the door off its hinges. They threw in a stun grenade, which is a percussion grenade, so that it makes a loud flash and a bang.

C. Virginia Fields: I could only imagine how frightening, terrifying, to be in a situation with your door being knocked down and a grenade being thrown into your space.

Derek Sells: When the police went in, instead of finding some drug den, what they found was a neat, tidy apartment of a older woman who lived alone. By the time they realized their mistake, Ms. Spruill was in pain. She could not catch her breath. She was frightened. The police then got EMS to come to the scene. She was taken to the hospital. And 20 minutes later, she was pronounced dead from cardiac arrest.

Radley Balko: The New York Police Department had raided the wrong apartment. The cops were acting on a tip from an informant who had previously been discredited. And they were using a warrant for a suspect who had already been arrested. They also deployed a flash-bang grenade, a device designed to temporarily blind and deafen anyone nearby.

The police had literally scared Alberta Spruill to death.

Joel Berger: This was the biggest news story in the city at the time. It shocked everybody.

Eric Adams: All of us must be outraged of an innocent 57-year-old woman who was inside her home — all of a sudden being disturbed in such a violent fashion.

Cynthia Howell: We want justice. Of course we want justice. We’re gonna do whatever it takes to get justice for her murder. Because who’s next? It’s gonna be your neighbor or whoever’s neighbor.

Radley Balko: A week later, Ousmane Zongo, a West African immigrant, was also killed by New York City police. Protests erupted around the city.

Seventeen years before the police killing of Breonna Taylor brought “no-knock” raids into the national spotlight, New York City residents were demanding an end to the practice.

Spruill’s death “should have been a wake-up call. It should have been a warning.”

Joel Berger: Spruill was really a watershed. It should have been a wake-up call. It should have been a warning. And instead, it was responded to with just the most perfunctory promises that we all knew perfectly well were not going to be kept over the years.

Kimberlé Crenshaw (#SayHerName: Black Women’s Stories of Police Violence and Public Silence event):

[Humming]

Alberta Spruill.
Say her name.
Alberta Spruill!
Say her name.
Alberta Spruill!
Say her name.
Alberta Spruill!

[Humming]

Radley Balko: Alberta Spruill went to church frequently. She had a son, and six siblings. She was a unique person with her own life, her own interests, her own family. But her death, and the angry public backlash to it, and the unkept promises for reform from public officials were all too familiar. You could easily swap in the names of numerous other Black women killed in the war on drugs — not just Breonna Taylor, but also Kathryn Johnston, who we covered in our first episode .

There’s also Annie Rae Dixon, shot and killed in a raid by a Texas police officer who had mistakenly fired his gun. Tarika Wilson was killed by an officer in Lima, Ohio, while holding her 1-year-old son. The couple Lillian Weiss and Lloyd Smalley died from smoke inhalation after Minneapolis police mistakenly raided their home and deployed a flash-bang grenade. Lynette Gayle Jackson, Geraldine Townsend, Laquisha Turner — the names go on and on.

C. Virginia Fields: My reaction to the tragic death of Breonna Taylor was, one: Here we go again. What has really changed in all of these years, even though we’re talking different states, different region of the country? Here we go again.

Radley Balko: From The Intercept, this is Collateral Damage.

I’m Radley Balko. I’m an investigative journalist who has been covering the drug war and the criminal justice system for more than 20 years.

The so-called “war on drugs” began as a metaphor to demonstrate the country’s fervent commitment to defeat drug addiction, but the “war” part of that metaphor quickly became all too literal.

When the drug war ramped up in the 1980s and ’90s, it brought helicopters, tanks, and SWAT teams to U.S. neighborhoods. It brought dehumanizing rhetoric and the suspension of basic civil liberties protections.

All wars have collateral damage: the people whose deaths are tragic but deemed necessary for the greater cause. But once the country dehumanized people suspected of using and selling drugs, we became more willing to accept some collateral damage in the drug war. In this modern war on drugs — which dates back more than 50 years to the Nixon administration — the United States has produced laws and policies ensuring that collateral damage isn’t just tolerated, it’s inevitable.

This is Episode 7, Dirty Information: The NYPD’s Shock Tactics and the death of Alberta Spruill.

C. Virginia Fields: I guess I heard about it along with everyone else on the news report. And it was very, very disturbing, the circumstances around it. Where this, what, 57, 59-year-old woman was already dressed to go to work and had been working in her position with the city for over some 29 years. And by all indications, a very, very solid church-going person.

Radley Balko: When C. Virginia Fields found out about the death of Alberta Spruill, she knew the scene of the incident well.

C. Virginia Fields: And I knew many people in that building, being in the political office that I held. And I often would go there for various meetings and political stuff.

Radley Balko: At the time, Fields was Manhattan borough president, essentially the equivalent to being the mayor of Manhattan.

C. Virginia Fields: We immediately connected with some of the people we knew in the building, the president of the association and some other tenants just to get a better sense from them. And we also was in contact with the police commissioner, Ray Kelly, to find out from the police side, what had happened.

Radley Balko: And what happened in that apartment, according to public officials, wasn’t quite matching up with the information that was trickling out.

Cynthia Howell: They sugar-coated it to the press. They didn’t want nobody to know.

Radley Balko: Spruill’s niece, Cynthia Howell, quickly became a spokesperson for the family.

Cynthia Howell: She had a glass table in her apartment. When they threw the bomb in, either it landed there and shards of glass struck her, or either when they went in, they threw her down. That’s the only way we can see fit where she got that broke arm and those gashes in her legs. And we got the pictures to prove it. As well as the autopsy report. So she died brutally.

Christian Covington: If you read the report, it doesn’t even make sense.

Radley Balko: That’s attorney Christian Covington, who helped facilitate a community meeting in Harlem about police brutality a few months after Spruill was killed.

Christian Covington: If you read the report, they make it seem like the police came in, they threw a stun grenade, they picked up Ms. Spruill, called the EMTs, and EMTs came, and everything was fine. And the police department patted her on the back and said, “Have a nice day.”

Radley Balko: One detail that sets Alberta Spruill’s death apart from many others is that the police acknowledged that they had made a mistake. According to authorities, the police apologized to Spruill right away in her apartment, before she went into cardiac arrest. The police commissioner also publicly apologized.

Cynthia Howell: It’s little consolation that they did take responsibility for it because it should’ve never happened. They did respectfully apologize in the news. Mayor Bloomberg attended the funeral.

Michael Bloomberg: On behalf of 8 million people of the city of New York, to you, Alberta’s family, I want to express our heartfelt condolences.

Radley Balko: That’s Mayor Michael Bloomberg, speaking at Spruill’s funeral at the time.

Michael Bloomberg: [applause] I want to assure all of you that Police Commissioner Raymond Kelly, who’s here with me, and I are doing a thorough review of what took place that morning. And we’ll institute better practices for everyone that will ensure that Alberta will not have died in vain. [applause]

Today, we must look at ourselves in the mirror and admit that at least in this case, existing practices failed. Our laws and procedures failed the public. As mayor, I failed to protect someone I was chose to work with. We all failed humanity. An innocent human being was taken from us, and our actions caused it.

Radley Balko: Mayor Bloomberg promised to improve how police operated in the city — to put policies in place to prevent a death like Spruill’s from ever happening again.

Joel Berger: This was in their first year and a half where they wanted to show that they were different from [former Mayor Rudy] Giuliani. The overall atmosphere of it was, “This was horrible. We’re not going to let this happen again. We’re going to change.”

Radley Balko: The problem is that Alberta Spruill’s death could have been prevented. The bad policies, shortcuts, and mistakes that caused police to barrel into the wrong apartment? Narcotics officers had been operating this way for a long time in New York. In fact, under the previous mayor, Rudy Giuliani, the 30th Precinct in Harlem was notorious for “operating like gangs”: breaking down doors without search warrants and stealing money and drugs.

There were ample warnings that unless things changed, someone was going to be killed. No one listened — or at least no one in city government who had the power to do anything about it.

Joel Berger: You would call it a comedy of errors, except it wasn’t a comedy since someone died.

Radley Balko: Joel Berger is a longtime New York civil rights lawyer. He’s been working on police misconduct issues since the 1990s.

Joel Berger: They had the wrong apartment. The informant had given them the wrong place. In fact, the guy they were looking for was actually in custody by the time of the raid. They went in with a percussion device, which was designed to strike fear into the residents. And the poor woman died of a heart attack.

Radley Balko: In the first few months after Spruill’s death, public debate focused on two issues: the use of confidential informants, and the practice of using no-knock raids to serve drug warrants.

The path that led police to Alberta Spruill’s apartment door that morning had begun months earlier, when police were making a routine street arrest for drugs.

Derek Sells: There was an individual whose name has never been revealed, but who was arrested on a minor trespassing offense.

Radley Balko: Attorney Derek Sells was part of the team representing Spruill’s family. Here, he testifies to the International Commission of Inquiry on Systemic Racist Police Violence in 2021.

Derek Sells: He was stopped by police, questioned, he was frisked, and they found a small amount of narcotics on him. He was charged, arrested with criminal trespass and possession of some narcotics. And he was given an opportunity to get a reduced sentence and a favorable plea — if he would simply provide information about higher-level drug dealing that was going on.

Radley Balko: We should note here that this specific detail isn’t in the police report, but offering deals like this to low-level offenders was, and still is, common practice. Of course, it’s risky too.

Police are relying on people breaking the very laws they’re trying to enforce — whether they’re drug sellers looking to knock off competition, people in custody looking to cut a deal on their own charges, or drug users willing to do or say almost anything for money to feed their addiction.

Derek Sells: And so having missed six appointments, without explanation, he was deemed unreliable, and he was decertified as a police informant.

The police in the 28th Precinct, however, did not put this information into the system that would alert other police precincts that this individual was no longer certified confidential informant because he was deemed unreliable. So he instead went to another Manhattan-based precinct, the 25th Precinct, where they accepted him with open arms.

Radley Balko: Sells told the human rights commission that the informant in Spruill’s case had been decertified after failing to show up for scheduled meetings, but the NYPD report says his previous handlers told the 29th Precinct that he was credible.

Derek Sells: This information that he gave was that there was an individual named Melvin Boswell who was heavily armed and was a drug dealer, someone who was dealing drugs out of apartment 6F at 310 West 143rd Street.

Radley Balko: The police now had a name and address from an informant. At this point, they should have done more investigating to corroborate this information. They had Spruill’s name as the occupant of Apartment 6F, and could have done some research into who she was. They did not.

They could have done surveillance, but later explained that the building was just too busy to watch the apartment without raising suspicion. The next step, then, was to obtain the warrant.

Getting a warrant to forcibly enter a private residence should be a difficult process. Getting a warrant to break in without first knocking and announcing should be even tougher. Judges are supposed to scrutinize these warrant applications to protect the Fourth Amendment rights of people suspected of crimes.

But as Joel Berger says, that process is too often just a rubber stamp.

Joel Berger: When the police go to get a warrant, they submit an affidavit to a judge. Usually they go before the judge, and the judge asks questions, quite often very perfunctory questions. Occasionally, the informant is brought before the judge, although not always. Sometimes the police just by hearsay say, “Oh, he’s a good informant. We’ve used him and he’s been helpful in the past.”

They don’t provide any proof of that, and they’re not asked for any proof of that. Sometimes in lawsuits I’ve been able to get discovery about the actual reliability — or supposed reliability — of the informant. And often the discovery will show that he’s wrong like half the time, a third of the time.

Radley Balko: In this case, the informant claimed that the suspect, Boswell, who lived upstairs, dealt drugs out of Spruill’s apartment.

Here’s Spruill’s niece, Cynthia Howell again.

Cynthia Howell: They just went on a word of a drug addict informant. And the informant just said it’s that apartment.

Radley Balko: This was the police’s second mistake: bad information. But as attorney Christian Covington points out, it’s also one that should have been easy to correct.

Christian Covington: They like to make the issues seem that it was all due to this confidential informant given the wrong information, but that’s not the issue. The issue was that they’re supposed to substantiate the information and investigate the information, and they didn’t do anything. They just got the warrant and went in there and knocked down the door.

“The issue was that they’re supposed to substantiate the information and investigate the information, and they didn’t do anything.”

Radley Balko: If police had done basic surveillance of the apartment, or just asked around, they would have realized the apartment they were about to raid was the home of a church-going 57-year-old woman who had worked for the city for decades.

Here’s Police Commissioner Ray Kelly testifying before the city’s Committee on Public Safety about a month after the raid.

Raymond Kelly: Even after getting the warrant, there should have been a lot more observation of the location, see what trafficking was going on.

Radley Balko: If the cops had done that basic observation, they also would have noticed something important in the days before the raid. Their target, Melvin Boswell, hadn’t been coming or going from his own apartment. The reason why is almost comically unbelievable.

Here’s attorney Derek Sells.

Derek Sells: Had they done another simple check on Melvin Boswell, they had checked their own records — they would have learned that Melvin Boswell was in prison.

Radley Balko: Boswell had been arrested four days earlier by a different group of NYPD cops, at a different precinct.

After the break, the raid that killed Alberta Spruill.

[Break]

Radley Balko: The morning of the raid, a team of law enforcement officers gathered to discuss how it would all go down.

City Councilmember Phil Reed would later grill Police Commissioner Kelly on this critical moment, what happened next, and what should have happened.

Philip Reed: Who knew, who should have known that this Boswell country character had already been incarcerated? Was there anybody at this tactical meeting that had that information and that wasn’t shared?

Raymond Kelly: Yes.

Philip Reed: Who was that?

Raymond Kelly: Precinct personnel knew that.

Philip Reed: So they were at the tactical meeting before they broke down the woman’s door. They knew that Boswell had already been arrested, and they didn’t tell anybody?

Raymond Kelly: They didn’t communicate that to the emergency service personnel, that’s correct.

Philip Reed: At the tactical meeting just moments before they went in?

Raymond Kelly: That’s correct.

Philip Reed: So they knew the person they were looking for was in jail, but they didn’t tell anybody.

Raymond Kelly: That’s right.

Radley Balko: In case you missed that exchange: Someone at the raid planning meeting knew that the targeted drug dealer was already in jail — but didn’t tell the rest of the team. And without this crucial information, the police just went full steam ahead.

That brings us to the second major public debate Spruill’s death sparked: the use of no-knock raids to serve drug warrants.

Derek Sells: Most searches are required to be done with what’s called a “knock and announce,” which means that armed with a legal search warrant, police go to a home, and they knock on the door, and they announce their purpose.

In order to get a no-knock warrant, the police and prosecutors are required to show the additional proof that not only was there probable cause, but also that the individual whose place that they wanted to search presented a danger.

Radley Balko: The no-knock raid pops up in several episodes of this podcast series, because it’s a staple of the war on drugs. It’s also a tidy encapsulation of how the drug war prioritizes arresting and convicting suspected drug dealers over the rights and safety of the people police are supposed to be serving, people who are disproportionately low-income and Black or Latino.

The no-knock raid encapsulates how the drug war prioritizes arresting suspected drug dealers over the rights and safety of the people police are supposed to be serving.

Joel Berger: Supposedly the excuse is that, in the case of drugs, they can be easily disposed of. Which is kind of interesting because, if it’s such a small quantity of drugs that they could be easily flushed down the toilet, why do they really need to use 20 officers to begin with? If it’s a major drug house, the culprits are not going to be able to flush everything down the toilet. So that knocks out the need for no-knock except in the most extreme circumstances.

Radley Balko: No-knock raids are supposed to be rare. They’re supposed to be reserved for only the most dangerous offenders. But under questioning by City Councilmember Frank Vallone, Commissioner Kelly conceded that no-knocks were the norm — much as they were in the rest of the country.

Raymond Kelly: This is the total up to April 30. For 2001 through 2003, the total number of warrants are 12,950 warrants.

Frank Vallone: Out of those search warrants, how many were no-knock?

“I would say the vast majority are no-knock.”

Raymond Kelly: I would say the vast majority are no-knock. Most of the warrants are aimed at narcotics. The vast majority of the warrants are targeted at seizing narcotics. And as a general rule, narcotics can be destroyed or disposed of — at least that’s our belief — if you knock on the door and give notice of your appearance, so they’re endorsed for what we call a no-knock entry.

Radley Balko: When the police raided Alberta Spruill’s apartment, they had problems prying open her door. They finally forced their way in with a battering ram. But they also feared that the time they had lost put them at risk. So they set off a flash grenade.

In case you don’t know what those sound like, here’s a police demo.

APD SWAT officer: You guys give me a countdown from three, and on one, I’ll throw it, OK?

Children: Yes.

APD SWAT officer: Everybody plug your ears. Ready? Go ahead.

Children: Three, two, one.

[Explosive sound]

Radley Balko: Councilmember Gifford Miller questioned Kelly about flash-bang grenades.

Gifford Miller: What are the factors that causes the Department to decide to use them at all, and in what circumstances? And what are the factors that cause people to want to use them in particular circumstances?

Raymond Kelly: The purpose of it is to shock someone. There is usually a determination made that there are weapons at the scene, that there’s a possibility of those weapons used against police officers. So it’s a loud noise, it’s a flash. It certainly is shocking in nature, and the belief is that it would stop someone from using a weapon — or act as a diversion.

Let’s say you wanted someone to go to another location in the house. You might do that in the back of a house and then hit the front door, something like that, in a coordinated fashion. But there has been an increased use, and I think there was a belief on the part of officers that it protects them.

Gifford Miller: Have you done any analysis of that? Is there an analysis of the use of these devices that suggests that in these kinds of raids, there are less shootings or less injuries on the part of officers, or less injuries on the part of people who are raiding? Or have you done any kind of analysis that suggest their actual effectiveness?

Raymond Kelly: We haven’t …

Radley Balko: In 2008, the federal government criminally indicted a Georgia-based flash-bang grenade manufacturer. The indictment alleged that the company’s grenades were prematurely detonating. One such incident had badly injured several FBI agents, who all experienced hearing loss. That indictment was eventually dropped. But even when they work correctly, police routinely toss these devices blindly into private homes.

By design, flash-bang grenades instill terror and shock in suspects who have often yet to even be charged with a crime. But they can also do quite a bit more damage than that. And of course, the grenade itself can’t distinguish suspects from innocent bystanders.

These devices have caused dozens of injuries and several deaths over the years. During a 2014 raid in Georgia, police threw a flash-bang that blew a hole in the chest of a 2-year-old boy. And of course, there are demographic patterns as to who gets targeted most.

Joel Berger: It’s almost always poor people, people of color, frequently people of the housing projects.

Radley Balko: Today, not a single state tracks no-knock raids, and neither does Washington, D.C. The most recent data available comes from a 2014 ACLU survey of police departments around the country. That survey found that 42 percent of suspects targeted by no-knock raids were Black. Black people make up about 14 percent of the U.S. population.

Joel Berger: One of the excuses, even though it isn’t always articulated, is “We want to scare these people into making sure they don’t have anything more to do with the guy we’re looking for.” So it is very much a form of social control — just as stop-and-frisk was a form of social control. Saying, “OK, maybe you don’t have guns on you, but if you’re friends with anybody in a gang, you better keep away from them.”

“It is very much a form of social control — just as stop-and-frisk was a form of social control.”

They are designed to strike fear into the hearts of low-income people in neighborhoods where there’s a lot of drug traffic or guns. And as a result, they frequently wind up harming police community relations much more than they contribute to any solving of crimes.

Radley Balko: Spruill’s fate was determined by the race and profile of the people around her, and by police conceptions of who is and isn’t a criminal.

Derek Sells: When the police went into Ms. Spruill’s apartment, what they believed was that they were going to confront an African American, stereotypical, drug-dealing gunslinging male, and that’s what they went prepared to do. And so when Ms. Spruill happened to be there, she was treated as if she was part of his crew. And she was thrown to the ground, she was violently handcuffed even before they could figure out what really was going on. And so yes, even though the ultimate victim in this case was a 57-year-old African American woman, the target was a stereotypical individual who the police believed was a Black male gunslinging drug dealer.

Radley Balko: Police claimed they found Spruill on her bedroom floor.

Even when no one is physically injured, the trauma from a violent police raid can do lasting psychological damage.

Joel Berger: All of the victims almost all suffer from some form of PTSD, post-traumatic stress disorder. They tell me every time they hear, you know, a little bit of noise outside their door, they’re afraid the cops are coming back. It could just be a neighbor throwing out the garbage, but they don’t know that. They are extremely frightened. They’re frightened every time they hear sirens. Some of them say they’re frightened every time they see a police officer on the street.

Radley Balko: In Black, brown, and low-income neighborhoods across the United States, this fear of police, this alienation, has been set in place after decades of overzealous, violent actions by law enforcement. About a decade before Spruill’s death, for example, police in Boston mistakenly raided the home of the Rev. Accelyne Williams, also based on a bad tip from an informant. As with Spruill, the trauma of that raid sent Williams into cardiac arrest, which proved fatal. His death also sparked protests and demands for reform. New York City in 2003 was no different.

C. Virginia Fields: The community response in learning about Ms. Spruill’s death was again: How many more times do we have to go through this and no changes that are occurring?

Radley Balko: After Spruill’s death, both the city and community groups held public meetings about the police department’s tactics.

C. Virginia Fields: We had people, I think, from almost probably every borough, maybe not Staten Island, who came and talked about experiences they either had had or knew about this no-knock policy.

Mr. Rodrigues: About 3 o’clock in the morning, six cops break my door. I was sleeping when I heard the noise. They hit the door three times, and the door fell down. They grabbed me up and from my shirt, one gong on my head, one gong on my chest.

Bonnie Paley: I was almost killed by the New York City Police. The public housing precinct, [Police Service] number 8 in Throggs Neck, came after me at 9:30 in the morning. Twenty-five cops targeted me and targeted my then-19-year-old daughter.

Mary Barti: They stormed into the house, forced us to lay on the floor, hands out. My husband, who’s sitting here, my daughter and her little daughter, 2 years old, on the floor in the living room.

Radley Balko: These stories shocked a lot of people. But for the people who lived in these communities and who had been paying attention, they weren’t surprising. The local media had been reporting on similar botched raids for more than a decade. Journalists had been covering the failure of judges to properly scrutinize search warrants. They had covered the use of unreliable informants, and the resulting terror inflicted on innocent people and their families.

Members of the city’s Civilian Complaint Review Board, or CCRB, had expressed frustration that they lacked the authority to do much about any of this. The CCRB investigates complaints that New Yorkers file against police officers, and while it can recommend discipline when it finds wrongdoing, the final decision rests with the NYPD commissioner.

Here’s William Aquino, a CCRB investigator from 1998 to 2002.

William Aquino: In multiple cases, other investigators and I were ordered to exonerate officers who had not done sufficient investigation and went into innocent people’s homes.

Radley Balko: Narcotics search warrants surged in New York City during the 1990s. In 1994, NYPD executed about 1,400 warrants. That figure doubled by 1997. The majority of these were for no-knock raids. And civilian complaints about searches on the wrong apartment or wrong address climbed alongside this rise in raids.

In June of 2003, Commissioner Kelly said out of 2,000 search warrants executed that year, just five had been on the wrong address. But Kelly couldn’t say for sure, because the NYPD just didn’t track how often it got the wrong address. This was common enough, however, that the agency had made maintenance workers available around the clock to fix the doors that police had mistakenly torn down.

The most chilling warning came from Norman Siegel, an attorney and former head of the New York Civil Liberties Union, who had filed a lawsuit on behalf of people who had been wrongly raided. “We must do a better job of no-knock search warrants,” he said in a press conference. “Otherwise, someone might wind up dead as a result of how we implement this procedure.”

That was less than a year before the raid on Alberta Spruill.

Spruill’s death even inspired some criticism of the NYPD from members of its own force. Here’s a clip from a Democracy Now! interview with a young Black officer who would later go into politics.

[Democracy Now! theme music]

Amy Goodman: … A court had granted the police a no-knock warrant. It turns out the police raided the wrong apartment. We’re joined right now by Lt. Eric Adams. He’s founder and president of 100 Blacks in Law Enforcement Who Care. Welcome to Democracy Now!

Eric Adams: Thank you very much for having me this morning.

Amy Goodman: There’s been a lot of activity this weekend after what happened on Friday. Can you describe what you know at this point?

Eric Adams: Well, all things are still currently under investigation, and the police department has been very reluctant in turning over detail of, findings of what happened. What we do know is that it appears as though the wrong apartment was targeted.

Radley Balko: Almost 20 years later, former Lt. Eric Adams would become mayor of New York City. At the time, Spruill’s death provided a platform for his advocacy group and raised his public profile.

Joel Berger: A young Eric Adams trying to make a name for himself as head of 100 Blacks in Law Enforcement, being highly critical of the police department’s behavior — which now goes on today, continuously under his mayoralty.

Radley Balko: Here’s Adams speaking to the City Council’s Committee on Public Safety.

Eric Adams: If I could just quickly go through why this Spruill incident should not be identified as an isolated issue. Back in March 2002, the Queens Narcotics Unit entered a home of a Ms. Flornell out in Rockaway. The police commissioner responded to Rockaway, he met with the NAACP, he had a meeting with them, and he stated it was a tragedy. He would do all he can to ensure it does not happen again; he will have a comprehensive report. No report was done. The tragedies continue.

October 15 of that same year. Mr. Rogers and his wife, a retired police officer and retired captain, same thing. Police entered their homes. Mr. Rogers had his gun drawn. He was about to get into a fire-fight with the police officers until he saw they were cops. He hid his gun. He was handcuffed. His wife had heart trouble; she had to go to the hospital for several days. He spoke with the police commissioner, the police commissioner stated it was a tragedy, he was going to do all he could so that it doesn’t happen again, and a report would be done. Nothing was done.

Radley Balko: Spruill’s death did inspire some reforms, at least in the short term. Kelly ordered that flash grenades could only be used with a sign-off from a high-ranking NYPD official. The city required more corroboration of tips from informants, better documentation of their reliability, and better communication between precincts.

There were also promises for better training, and to create a database to track warrants, how they were served, and what the police found. And Berger says that, at least for a time, the procedures around when and how to conduct searches and raids did actually start to shift.

Joel Berger: For a few years, they were a little more careful because of all the negative publicity surrounding Spruill. I mean, of course, the percussion device was part of what scared her to death, and they haven’t used that very much since.

Radley Balko: Consequently, the number of overall raids dropped, from more than 5,000 warrants for drugs and guns per year to around 3,500. But even this lower figure was still 150 percent higher than just a decade earlier. It also didn’t take long for the bad habits to return.

Joel Berger: Everything else that they promised to do — checking out who really lives there, checking out whether the informant is reliable, checking out whether there’s been any information other than from the informant that would verify what the informant is saying — almost all of that has gone completely by the wayside over the past 20 years to the point where I have had numerous cases where totally innocent people had their apartments raided on no-knock warrants, and the police didn’t find anything at all.

And nonetheless they defended that, “Oh, well, you know, we had information,” and the city’s law department fights the cases tooth and nail, and in the end, you usually have to settle for less than it’s worth, and worse yet, the cops are never punished.

Radley Balko: Around the country, accountability is always the major sticking point in efforts to rein in police misconduct. New York City after Spruill was no exception. Members of the Civilian Complaint Review Board had tried for years to warn city officials about the out-of-control drug raids.

William Aquino: Unfortunately, Alberta Spruill is just the latest victim of a pattern of recklessness with search warrants and bench warrants that the NYPD and the Civilian Complaint Review Board have known about and tacitly encouraged for years.

Radley Balko: Former review board investigator William Aquino told the committee on public safety that when he had discovered wrongdoing, he was often pressured or forced to alter his official findings.

William Aquino: For example, a Brooklyn case where narcotics informant’s only description of the premises was that it was the door to the right of the stairs. When I went there I found two doors on the right, yet the officer simply guessed and sent ESU in with a grenade anyway. In circumstances remarkably similar to Ms. Spruill’s case, an older woman was handcuffed and kicked to the ground.

In another example, a Bronx case, in which a sergeant misrepresented a description of the house to a judge and CCRB, and misled his own supervisor into thinking that he had done the standard checks of utilities records. After I refused to comply with my manager’s demand that I change my report and exonerate the officer, the CCRB panel exonerated the search.

Radley Balko: Aquino, who served under mayors Rudy Giuliani and Michael Bloomberg, described how officers wouldn’t do the legwork to verify the addresses of warrant requests, and then would dodge accountability after the fact.

William Aquino: Officers and their union lawyers invariably insist that everything is legal once a cop is holding a warrant, as if questionable information magically becomes gospel once you sell a too-trusting judge on it. To them, once a judge signs off or issues a bench warrant, the police are absolved of all responsibility, even if they know that their information is actually thinner than the paper the warrant is printed on. End of story.

“To them, once a judge signs off or issues a bench warrant, the police are absolved of all responsibility.”

Radley Balko: In other words, even if the police lied to get a warrant, once that warrant was signed by a judge, it became legal. This made holding the police, or individual officers, accountable virtually impossible. Ultimately, efforts to empower the board to scrutinize NYPD narcotics policy, and the investigations that led to these warrants proved futile.

Joel Berger: There have been deaths, and there’re gonna be more deaths.

Radley Balko: Berger continues to represent victims of police abuse but he says that even when he wins on paper, it’s just part of an endless cycle: Police terrorize innocent people, the city pays out a settlement, and then nothing changes. And then it all happens again.

Joel Berger: There are no consequences for the police officers who do these things. I mean, I bring lawsuits. I get compensation for the victims. Not only are the lawsuits ineffectual, but the city deliberately slows them down and fights tooth and nail against even getting some compensation for people.

The city spends millions of dollars a year settling these cases or paying out judgments. This all comes out of the taxpayer’s money, and nothing is done to the officers. Or at most, even in the most extreme cases, all that’s likely to happen is the officer gets a slap on the wrist. Maybe 10 days’ vacation time is taken away from him, sometimes not even that.

So the lawsuits are unfortunately ineffective in bringing about genuine change. That is one of the most frustrating things in what I’ve been doing for a living, having to explain that to people. I have had cases where when I hand over the settlement check to the client; the client breaks down in tears saying it’s not enough. It’s just not good enough. It’ll never really be enough.

Radley Balko: The city of New York eventually paid Alberta Spruill’s family $1.6 million. But the raids continued.

Cynthia Howell: You know, they’ll hand out a settlement, a settlement, a settlement. That doesn’t settle the fact that if you don’t change your policing policies, those settlements don’t mean nothing.

Radley Balko: Spruill’s niece, Cynthia Howell, often mentioned that hers was the rare family to receive an apology from the mayor. Mayor Michael Bloomberg also named a daily bus run after Spruill. It’s the 6:52 a.m. bus on the M1 line. It’s the bus Spruill was preparing to take to her city job on the morning she was killed, as she had every day for 29 years. But that symbolic gesture hardly seems sufficient.

Joel Berger: The NYPD is an incredibly powerful agency, and it exercises its power vociferously. It gives into a more, even more vociferous union, which gets altogether too much attention. City Hall, even under better mayors than the one we have now, has been afraid to go up against the NYPD. Even the City Council has been reluctant to really clamp down. The state legislature has been reluctant to clamp down and only did so a little bit in the wake of George Floyd, only to the extent of making police disciplinary records more accessible.

The city comptroller’s office continues to settle cases all the time without requiring that anything be done to the police officer. The DAs keep records on officers who they believe are not credible, but do not prosecute them for lying in specific cases. There are so many different agencies that all contribute to this.

C. Virginia Fields: I believe in community police relationships. I am not one to talk about defunding the police. To me, that’s not the answer, but I do know that I expect and demand police to come into communities and be respectful, to not mistreat people.

Radley Balko: C. Virginia Fields isn’t in government anymore. She says the failure of Spruill’s death to bring real change left her discouraged about the possibility of fixing the system.

C. Virginia Fields: Unfortunately, we don’t even hear about change, or we don’t talk about change, until an incident comes up. Then we all get very busy, we’ve got to do something, and that lasts for a short period of time. There is not the intentional, purposeful, continuation of working on these issues to follow them through at the level, the top, where we need to be making the changes.

Radley Balko: In 2003, Alberta Spruill joined the long and ever-growing list of innocent people killed in drug raids. Each of those deaths added new voices to the movement for reform.

In the years after her aunt’s death, Cynthia Howell helped found a group called Families United 4 Justice, along with the uncle of Oscar Grant — the man shot and killed by a police officer while lying face-down in an Oakland subway station.

Cynthia Howell: What we are caring about is accountability. We are caring about justice. And none of these families, not even my own, has received the justice.

[“Say Her Name” song by Janelle Monae plays]

Alberta Spruill, say her name!
Alberta Spruill, say her name!
Alberta Spruill, say her name!

Cynthia Howell: A fight ain’t a fight unless you fight, and we have no choice but to fight. We have been thrust into this by circumstances.

Radley Balko: Next time on Collateral Damage.

Bills Aylesworth: They cooked up a scheme, a story, that he was growing marijuana on the property.

Richard Dewitt: Captain Dewitt here. I’m on a search warrant with the Hidden Hills crew on this marijuana eradication thing.

Bills Aylesworth: And raided his house.

Dan Alban: They were just looking for an excuse to invade his ranch, search everything, and find some basis for the seizure.

Radley Balko: Collateral Damage is a production of The Intercept.

It was reported and written by me, Radley Balko.

Additional writing by Andrew Stelzer, who also served as producer and editor.

Laura Flynn is our showrunner.

Ben Muessig is our editor-in-chief.

The executive producers are me and Sumi Aggarwal.

We had editing support from Maryam Saleh.

Truc Nguyen mixed our show.

Legal review by Shawn Musgrave and David Bralow.

Fact-checking by Kadal Jesuthasan.

Art direction by Fei Liu.

Illustrations by Tara Anand.

Copy editing by Nara Shin.

Social and video media by Chelsey B. Coombs.

Special thanks to Peter Beck for research assistance.

Thank you to the WNYC archive for audio from Alberta Spruill’s funeral service and from the Harlem Interfaith Group on Police Brutality. We also want to thank the International Commission of Inquiry on Systemic Racist Police Violence Against People of African Descent in the United States for audio from the Hearing on the case of Alberta Spruill.

This series was made possible by a grant from the Vital Projects Fund.

If you want to send us a message, email us at podcasts@theintercept.com

To continue to follow my work and reporting, check out my newsletter, The Watch, at radleybalko.substack.com.

Thank you for listening.

Cloudflare blames this week's massive outage on database issues

Bleeping Computer
www.bleepingcomputer.com
2025-11-19 10:54:54
On Tuesday, Cloudflare experienced its worst outage in 6 years, blocking access to many websites and online platforms for almost 6 hours after a change to database access controls triggered a cascading failure across its Global Network. [...]...
Original Article


On Tuesday, Cloudflare experienced its worst outage in 6 years, blocking access to many websites and online platforms for almost 6 hours after a change to database access controls triggered a cascading failure across its Global Network.

The company's Global Network is a distributed infrastructure of servers and data centers across more than 120 countries, providing content delivery, security, and performance optimization services and connecting Cloudflare to over 13,000 networks, including every major ISP, cloud provider, and enterprise worldwide.

Matthew Prince, the company's CEO, said in a post-mortem published after the outage was mitigated that the service disruptions were not caused by a cyberattack.


"The issue was not caused, directly or indirectly, by a cyber attack or malicious activity of any kind. Instead, it was triggered by a change to one of our database systems' permissions which caused the database to output multiple entries into a "feature file" used by our Bot Management system," Prince said .

The outage began at 11:28 UTC when a routine database permissions update caused Cloudflare's Bot Management system to generate an oversized configuration file containing duplicate entries. The file, which exceeded the built-in size limits, caused the software to crash while routing traffic across Cloudflare's network.

The underlying database query returned duplicate column metadata after the permissions change, ballooning the feature file from approximately 60 features to over 200 and exceeding the system's hardcoded 200-feature limit, which was designed to prevent unbounded memory consumption.

5xx error HTTP status codes during outage (Cloudflare)

​Every five minutes, a query generated either correct or faulty configuration files, depending on which cluster nodes had been updated, causing the network to fluctuate between working and failing states.

Additionally, when the oversized file propagated across network machines, the Bot Management module's Rust code triggered a system panic and 5xx errors, crashing the core proxy system that handles traffic processing.

Core traffic returned to normal by 14:30 UTC after Cloudflare engineers identified the root cause and replaced the problematic file with an earlier version. All systems were fully operational by 17:06 UTC. The outage affected Cloudflare's core CDN and security services, Turnstile, Workers KV, dashboard access, email security, and access authentication.

"We are sorry for the impact to our customers and to the Internet in general. Given Cloudflare's importance in the Internet ecosystem any outage of any of our systems is unacceptable," Prince added.

"Today was Cloudflare's worst outage since 2019. We've had outages that have made our dashboard unavailable. Some that have caused newer features to not be available for a period of time. But in the last 6+ years we've not had another outage that has caused the majority of core traffic to stop flowing through our network."

Cloudflare mitigated another massive outage in June, which caused Zero Trust WARP connectivity issues and Access authentication failures across multiple regions, and also impacted Google Cloud infrastructure.

In October, Amazon also addressed an outage triggered by a major DNS failure that disrupted connectivity to millions of websites using its Amazon Web Services (AWS) cloud computing platform.


What Killed Perl?

Hacker News
entropicthoughts.com
2025-11-19 10:25:25
Comments...
Original Article

Trick question! Perl is not dead. I’ll show you what I mean, and then still answer what I think killed Perl.

[Chart: CPAN releasers per year, from the CPAN Report 2023]

The cpan Report 2023 put together by Neil Bowers quite clearly illustrates that Perl’s popularity is somewhere in the same range it was during the dotcom bubble. 1 I realise cpan usage isn’t a perfect proxy. There are probably a lot of people like me who use Perl specifically for things where they don’t need to interact with third-party libraries. These wouldn’t show up in the cpan records either, obviously. But it’s the best proxy I have. If anything, it’s higher: popularity increased ever so slightly after 2022, as next year’s cpan report will show. (Neil says he will publish it in January, so follow his blog for the latest news.) But it is also clear that newcomers make up a decreasing portion of the Perl crowd, and this has been the case since 2011. Why is that?

Some people seem to think Raku (formerly known as “Perl 6”) sucked momentum out of Perl, but I don’t believe that. Everyone I talked to back then knew Perl wasn’t going anywhere. Humanity had chained too much of the infrastructure of the growing internet to it. Even if Raku turned out to be a wild success, someone would have to keep maintaining Perl for many years to come. There was never any danger of obsolescence in starting a new project in Perl.

Besides, Raku was first announced in 2000, and the major burst of activity around Raku implementations seems to have been at the end of that decade. Through that period, Perl grew rapidly, as indicated by the graph.

I still struggle to understand why Perl went out of favour, which is understandable if you know what I think about it. But I have heard two reasons that resonate with me.

  1. The people who grew up on Unixy systems in the 1990s and early 2000s would know shell, C, awk, sed, Vim, etc. To these people, Perl is a natural extension of what they were already doing. Then in the 2000s came a new generation of programmers brought up on … I don't know, Microsoft systems, Visual Basic and Java? These people were more attracted to something like Python as a second language, which then became popular enough to become the first language of the generation after that.
  2. Back when people learned Perl, you didn’t just go online and download development tools for a programming language on a whim. Binary package managers that chase down dependencies on their own weren’t a thing until the early 2000s, I think? And even then they didn’t have all that many packages. So even if, I don’t know, Oberon or Eiffel would be a better fit for someone in the 1990s, they might have opted to go with Perl anyway because that was what they had. These days, this is not as much of a problem anymore. 2 You’ll find that the invention of many of the popular languages of today, such as Rust, Kotlin, Elixir, TypeScript, and Go happen to coincide with the growth of the internet and increased power of package managers.

So to state my hypothesis briefly: people today are both less predisposed to understand Perl, and have easy access to so many other alternatives. It’s a rather unsatisfactory explanation, but it’s the closest I can get.

The $1k AWS Mistake

Hacker News
www.geocod.io
2025-11-19 10:00:05
Comments...
Original Article

A cautionary tale about AWS VPC networking, NAT Gateways, and how a missing VPC Endpoint turned our S3 data transfers into an expensive lesson.

I've been using AWS since around 2007. Back then, EC2 storage was entirely ephemeral and stopping an instance meant losing all your data. The platform has come a long way since then.

Even after nearly two decades with the platform, there's always something new to learn. And sometimes those lessons come with a $1,000 price tag.

The setup

We recently moved over to using S3 for mirroring some large internal data files for Geocodio. We're talking about geographic datasets (things like address points, boundary data, and census information) that range from a few gigabytes to hundreds of gigabytes each. Some of these files are updated almost daily with fresh data, while others are refreshed less frequently. They need to be synced regularly from our ETL platform (which is hosted with Hetzner) to our processing infrastructure on AWS.

AWS has notoriously high data transfer costs. Cloudflare has written extensively about this, and it's a common complaint across the industry. Corey Quinn from Last Week in AWS has also called out the AWS Managed NAT Gateway for being particularly expensive. AWS charges $0.09 per GB for data transfer out to the internet from most regions, which adds up fast when you're moving terabytes of data.

So before starting this project, I did my homework. I carefully researched the costs involved and confirmed two critical things:

  1. AWS still allows free transfer between EC2 instances and S3 (as long as they're in the same region)
  2. Transfers into S3 are free (this was important since the data comes from our ETL platform hosted with Hetzner)

Great! I had a clear picture of the costs.

...Or so I thought.

The surprise

A few days after deploying the new S3 sync process, I got a notification from AWS Cost Anomaly Detection. (Boy, was I happy that I had that enabled!)

The alert showed something alarming: 20,167.32 GB of "NAT Gateway" data transfers in a single day, which amounted to $907.53.

Month to date, this had already surpassed $1,000.

AWS Cost Anomaly Detection showing unexpected NAT Gateway charges

I stared at the dashboard in disbelief. How could this be happening? I had specifically confirmed that EC2-to-S3 transfers were free!

But why oh why?

After some frantic investigating (and a bit of panic), I discovered the culprit.

When you're using VPCs with a NAT Gateway (which most production AWS setups do), S3 transfers still go through the NAT Gateway by default. Even though you're making requests to an AWS service that's in the same region, the traffic is routed out through your NAT Gateway and back in, incurring data transfer charges at $0.045 per GB.
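
Those two figures line up almost exactly: 20,167.32 GB × $0.045 per GB ≈ $907.53, the daily amount Cost Anomaly Detection reported.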

Architecture diagram showing traffic flowing from EC2 through NAT Gateway to S3

The solution? VPC Endpoints for S3, specifically what AWS calls a "Gateway Endpoint."

A Gateway Endpoint is a special type of VPC endpoint that allows you to privately route traffic to S3 without going through your NAT Gateway or Internet Gateway. It's essentially a direct pipe from your VPC to S3.

Even better, Gateway Endpoints for S3 are completely free. No hourly charges, no data transfer charges. Nothing.

The fix

The solution is to create a VPC Gateway Endpoint for S3. This is a special type of VPC endpoint that creates a direct route from your VPC to S3, bypassing the NAT Gateway entirely.

Architecture diagram showing direct traffic flow from EC2 through VPC Gateway Endpoint to S3

In our case, we manage infrastructure with Terraform, so it was just a matter of adding the Gateway Endpoint resource and associating it with our route tables. AWS automatically handles the routing updates to direct S3 traffic through the endpoint instead of the NAT Gateway.
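
For teams not using Terraform, the same change can be made directly through the EC2 API. Here is a minimal boto3 sketch of creating the Gateway Endpoint; the region, VPC ID, and route table IDs are placeholders to replace with your own, and this is an illustration of the idea rather than the exact change we applied:

import boto3

# Placeholders: substitute your own region, VPC and route tables.
REGION = "us-east-1"
VPC_ID = "vpc-0123456789abcdef0"
ROUTE_TABLE_IDS = ["rtb-0123456789abcdef0"]

ec2 = boto3.client("ec2", region_name=REGION)

# Create a Gateway Endpoint for S3 and attach it to the route tables.
# AWS then routes S3 traffic through the endpoint instead of the NAT Gateway.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=VPC_ID,
    ServiceName=f"com.amazonaws.{REGION}.s3",
    RouteTableIds=ROUTE_TABLE_IDS,
)
print(response["VpcEndpoint"]["VpcEndpointId"])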

The lesson

I've built countless VPCs, configured security groups, set up load balancers, and optimized costs in dozens of ways over the years. But somehow, VPC Endpoints for S3 had slipped through the cracks of my knowledge.

AWS's networking can be deceptively complex. Even when you think you've done your research and confirmed the costs, there are layers of configuration that can dramatically change your bill.

Don't make my mistake. Here are a few things I'd suggest checking to help you avoid your own surprise $1,000 bill:

AWS Cost Anomaly Detection is worth setting up. It caught this issue within days, saving us from an even larger surprise at the end of the month. If you haven't enabled it yet, do it now.

VPC Endpoints are your friend. If you're using S3 or DynamoDB from EC2 instances in a VPC with a NAT Gateway, you absolutely need Gateway Endpoints. There's literally no reason not to use them. They're free and improve performance.

Always validate your assumptions. I thought "EC2 to S3 is free" was enough. I should have tested with a small amount of data and monitored the costs before scaling up to terabytes.

The cloud is complicated. There's always more to learn, even after nearly two decades. And that's okay. It just means we need to be careful and vigilant.

And we're not alone in this. Just last year, Recall.ai discovered they were paying $1M annually in unexpected AWS WebSocket data processing fees. Even experienced teams hit these surprises.

What's next

We've since audited our entire AWS infrastructure to make sure we have Gateway Endpoints configured for all VPCs that communicate with S3.

If you're using AWS and you haven't checked your VPC Endpoint configuration lately, I'd recommend taking a look. That $1,000 lesson doesn't need to be repeated.

TL;DR: NAT Gateways charge for ALL data processing, even for traffic to AWS services like S3 that have no data transfer fees. Use VPC Endpoints to bypass this.


How California Spent Natural Disaster Funds to Quell Student Protests for Palestine

Intercept
theintercept.com
2025-11-19 10:00:00
California’s Law Enforcement Mutual Aid fund has been used to fight fires, floods, earthquakes — and Gaza demonstrations. The post How California Spent Natural Disaster Funds to Quell Student Protests for Palestine appeared first on The Intercept....
Original Article

Cal Poly Humboldt students had been occupying a campus building in solidarity with Palestine for three days when then-university President Tom Jackson decided to bring the demonstration to an end. But he didn’t think the university could break the occupation, some two dozen members strong, on its own. In an email to the sheriff of the Humboldt Police Department on April 25, 2024, Jackson asked to tap a pool of policing cash clothed in the language of anarchist solidarity: the “law enforcement mutual aid system.”

In California, the Law Enforcement Mutual Aid Fund sets aside $25 million annually to let law enforcement agencies work across jurisdictions to fight natural disasters and other major emergencies. In a briefing obtained by The Intercept, acceptable LEMA use cases are listed as fires, storms, flooding, earthquakes, natural or man-made disasters, and “other extra ordinary events requiring emergency law enforcement mutual aid on a case by case basis.”

Leadership at California State Polytechnic University, Humboldt — part of the California State University public school system — was able to tap these funds to bring outside law enforcement onto campus, The Intercept found in an investigative series on the university playbook for crushing pro-Palestine protests. Among more than 20,000 pages of documentation The Intercept obtained via public records requests, email after email from April and May 2024 show chiefs of police and administrators in California’s public universities asking outside law enforcement agencies to enter their campuses and clear encampments.

As “Gaza solidarity” encampments popped up across college campuses in April and May 2024, Jodi Lopez, staff services manager at California’s Office of Emergency Services, informed the leadership of at least 30 public universities — including Cal Poly Humboldt — that if they were to require mutual aid assistance, LEMA would be available to reimburse their expenses, attaching a flyer that detailed eligible costs.

Cal Poly Humboldt students first entered and staged a peaceful sit-in at Siemens Hall on April 22. According to the documents obtained by The Intercept, leadership at the university was promptly in contact with local police departments about bringing the demonstration to an end. That day, police in riot gear attempted to enter the building and clear out the protesters, but students held them off. In an incident that would go viral on social media, a student could be seen on surveillance footage hitting officers on their helmets with an empty plastic water jug. The cops eventually withdrew from the building, marking the start of what would turn into an eight-day occupation.

Enlisting the help of Humboldt County’s Office of Emergency Services, the Eureka Police Department, and the University of California Police Department, Jackson’s email on April 25 requested assistance with “Reestablish[ing] control of university buildings and other property” and “eliminating the threat of domestic violent extremism and criminal behavior” on the part of the students — setting into motion the plan with which the cops ultimately cleared the hall. Ryan Derby, then head of the county OES, added in his mutual aid request that Cal Poly Humboldt would require the assistance of a total of 250 law enforcement officers, with “personnel for entry team trained in tactical room clearing and arrest and control.”

In a statement emailed to The Intercept, Cal Poly Humboldt spokesperson Aileen S. Yoo confirmed that the university “formally requested from the state Law Enforcement Officer support through the LEMA request process” and noted that “Cal Poly Humboldt remains firmly committed to upholding the rights guaranteed under the First Amendment, ensuring that all members of our community can speak, assemble, and express their views.”

A Cal OES spokesperson confirmed in a statement to The Intercept that “Local law enforcement who provided that support to Cal Poly Humboldt were reimbursed through the LEMA Fund program.” The statewide office “is committed to protecting Californians and supporting local partners in times of crisis, regardless of political views or affiliation,” the spokesperson wrote.

If there were ever a social contract between students and administrators at U.S. universities that allowed for the operation of insulated, on-campus police departments thought to be better attuned to the needs of students, that contract was shattered when universities nationwide brought in outside law enforcement to crush the student-led movement for Palestine, argued civil liberties advocates who spoke with The Intercept. A year before the Trump administration would step up efforts to use police power against public protest , the Palestine solidarity encampments made universities a test case for the tolerance of dissent — one that universities overwhelmingly failed.

“ I don’t even know if we can talk about the trust that students have in their universities. But if there was any trust, you ruin it when you bring in outside police to harm your own students,” said Sabiya Ahamed, a staff attorney at Palestine Legal.

“If campus closure is required through the weekend, revenue loss will grow considerably.”

As Jackson stated in his email, Cal Poly Humboldt’s budget was at stake. “Three large events and a dozen smaller events on campus have been canceled. Athletic events have been either canceled or moved off main campus,” he wrote. “If campus closure is required through the weekend, revenue loss will grow considerably.”

University and outside law enforcement would go on to arrest 25 students at Siemens Hall. Alongside over a dozen wildfires — including the deadly Palisades Fire , which destroyed more than 6,000 homes — the raid is currently listed on the LEMA website as an example of a case for which funding can be requested.

While it is far from a secret that outside law enforcement agencies were involved in the clearing of university pro-Palestine encampments, these terms of operation — and compensation — have never previously been reported on in detail. Communications between university officials and the outside agencies show that the process took shape in the smooth functioning of bureaucracy, with polite, breezy exchanges preceding violent crackdowns and raids.

As the pro-Palestine demonstrations continued, the practice of bringing outside law enforcement officers onto campus became increasingly normalized in the University of California system. On May 5, 2024, Lamine Secka, chief of police at UC San Diego, wrote to the California Highway Patrol: “Attached, please find a request for assistance to clear out a protest encampment on the UC San Diego campus.” CHP, acting with UCSD and the San Diego County Sheriff’s Department, would enter the campus in full riot gear on May 6, arresting dozens of student protesters. (It was not clear if LEMA funds covered that deployment, and UCSD did not respond to The Intercept’s request for comment.)

The presence of outside law enforcement officers on campus fundamentally alters the power dynamics of a protest, said Ahamed of Palestine Legal. “ These police officers who are trained in violent tactics, you bring them to campus and they’re deploying those tactics against students. That is really dangerous,” she said.

In some cases, that meant radicalizing students who watched militarized police forces haul their classmates away. In others, it meant injuring peaceful protesters — especially at the University of California Los Angeles, according to students and faculty who spoke with The Intercept. At UCLA, university administrators tapped state emergency services funds to bring in outside law enforcement officers and arrest countless students, with many injured. UCLA did not respond to The Intercept’s request for comment.

“They were showing us the level of militarization within these departments,” Dylan Kupsh, a fifth-year Ph.D. student at UCLA, told The Intercept. “Even since the encampment, they’ve been more and more present and bringing in other departments.”

In the face of this repression, said Corey Saylor, the research and advocacy director at Council on American-Islamic Relations, “This generation of college students is extraordinarily brave and principled. They’ve been willing to sacrifice education and career to stand on a very simple human value that genocide is wrong, that occupation is wrong, that apartheid is wrong.”

The pro-Palestine encampments presented university leaders with a publicity crisis, forcing them to choose between options ranging from letting the peaceful protests play out to quashing them with the full force of the police. Universities almost exclusively chose the latter. With encouragement from the state government, California public universities responded to the student protests less like dissent and more like a natural disaster.

Research support provided by the nonprofit newsroom Type Investigations.

‘PlushDaemon’ hackers hijack software updates in supply-chain attacks

Bleeping Computer
www.bleepingcomputer.com
2025-11-19 10:00:00
The China-aligned advanced persistent threat (APT) tracked as 'PlushDaemon' is hijacking software update traffic to deliver malicious payloads to its targets. [...]...
Original Article


A China-linked threat actor tracked as 'PlushDaemon' is hijacking software update traffic using a new implant called EdgeStepper in cyberespionage operations.

Since 2018, PlushDaemon hackers have targeted individuals and organizations in the United States, China, Taiwan, Hong Kong, South Korea, and New Zealand with custom malware, such as the SlowStepper backdoor.

PlushDaemon has compromised electronics manufacturers, universities, and a Japanese automotive manufacturing plant in Cambodia. Telemetry data from cybersecurity firm ESET indicates that since 2019, the threat actor has relied on malicious updates to breach target networks.


PlushDaemon victims since 2023 (Source: ESET)

Attack chain

The attackers gain access to routers by exploiting known vulnerabilities or weak admin passwords, install the EdgeStepper implant, and then redirect software-update traffic to their own infrastructure.

Developed in Golang and compiled as an ELF binary, EdgeStepper works by intercepting DNS queries and redirecting them to a malicious DNS node after confirming that the domain is employed for delivering software updates, ESET researchers explain in a report shared with BleepingComputer.

When a victim tries to update their software, they receive the first-stage malware downloader for Windows called LittleDaemon, which is disguised as a DLL file named ‘popup_4.2.0.2246.dll’.

Overview of the first attack stages (Source: ESET)

LittleDaemon establishes communication with the attacker's hijacking node and fetches a second malware dropper named DaemonicLogistics, which is decrypted and executed in memory.

In the next stage of the attack, the hackers use DaemonicLogistics to retrieve their signature backdoor, SlowStepper.

The backdoor has been previously documented in attacks against users of the South Korean VPN product IPany. During those attacks, users downloaded a trojanized installer from the vendor’s official website.

The SlowStepper malware enables hackers to collect detailed system information, execute extensive file operations, run commands, and use various Python-based spyware tools that can steal data from the browser, intercept keystrokes, and collect credentials.

ESET researchers note that the PlushDaemon's adversary-in-the-middle capabilities are strong enough "to compromise targets anywhere in the world."

The report published today includes technical details for all the newly uncovered malware as well as a set of indicators of compromise for files, IP addresses, and domains that PlushDaemon used in attacks that deployed the EdgeStepper network implant.


Multimodal Diffusion Language Models for Thinking-Aware Editing and Generation

Hacker News
github.com
2025-11-19 09:27:17
Comments...
Original Article

MMaDA-Parallel: Parallel Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation

Links: MMaDA-Parallel paper on arXiv · project page · models on Hugging Face

🌌 Introduction

Demo: Parallel text-image generation in action.

While thinking-aware generation aims to improve performance on complex tasks, we identify a critical failure mode where existing sequential, autoregressive approaches can paradoxically degrade performance due to error propagation. To systematically analyze this issue, we propose ParaBench , a new benchmark designed to evaluate both text and image output modalities. Our analysis using ParaBench reveals that this performance degradation is strongly correlated with poor alignment between the generated reasoning and the final image. To resolve this, we propose a parallel multimodal diffusion framework that enables continuous, bidirectional interaction between text and images throughout the entire denoising trajectory. This model, MMaDA-Parallel , is trained with supervised finetuning and then further optimized by Parallel Reinforcement Learning ( ParaRL ), a novel strategy that applies semantic rewards along the trajectory to enforce cross-modal consistency. Experiments validate that our approach significantly improves cross-modal alignment and semantic consistency, achieving a 6.9% improvement in Output Alignment on ParaBench compared to the state-of-the-art model, Bagel, establishing a more robust paradigm for thinking-aware image synthesis.

Architecture of MMaDA-Parallel. During Training, image and text responses are masked and predicted in parallel with a uniform mask predictor. During Sampling, the model performs parallel decoding to generate both image and text responses jointly, enabling continuous cross-modal interaction.

Results

Qualitative comparison.

Quantitative results on ParaBench.

📰 Latest Updates

⚙️ Quick Start

Note: Our model has been successfully validated on synthetic datasets focusing on environments, still life, architecture, and natural landscapes. Its performance on out-of-distribution inputs—such as human faces or real-world photographic imagery—has not yet been fully explored. We are actively expanding our training corpus to include more diverse datasets.

1. Environment Setup

First, start from a PyTorch environment with torch 2.3.1 or higher, then install the following dependencies:

pip install -r requirements.txt

We provide two variants of MMaDA-Parallel with different tokenizers: MMaDA-Parallel-A is trained with the Amused-VQ tokenizer, and MMaDA-Parallel-M with the Magvitv2 tokenizer.

2. Experiencing Parallel Gen with MMaDA-Parallel-A

You can directly use the local gradio app to experience the parallel generation with MMaDA-Parallel-A:

Or you can use the inference script to generate the parallel generation results:

cd MMaDA-Parallel-A
python inference.py \
    --checkpoint tyfeld/MMaDA-Parallel-A \
    --vae_ckpt tyfeld/MMaDA-Parallel-A \
    --prompt "Replace the laptops with futuristic transparent tablets displaying holographic screens, and change the drink to a cup of glowing blue energy drink." \
    --image_path examples/image.png \
    --height 512 \
    --width 512 \
    --timesteps 64 \
    --text_steps 128 \
    --text_gen_length 256 \
    --text_block_length 32 \
    --cfg_scale 0 \
    --cfg_img 4.0 \
    --temperature 1.0 \
    --text_temperature 0 \
    --seed 42 \
    --output_dir output/results_interleave

3. Parallel Gen with MMaDA-Parallel-M

cd MMaDA-Parallel-M
python inference.py interleave_root=./interleave_validation  

TODO

  • Release the MMaDA-Parallel code and paper.
  • Evaluation on ParaBench code.
  • Refine MMaDA-Parallel-M and update the corresponding checkpoint.
  • Training code for SFT and ParaRL.

📖 Citation

@article{tian2025mmadaparallel,
  title={MMaDA-Parallel: Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation},
  author={Tian, Ye and Yang, Ling and Yang, Jiongfan and Wang, Anran and Tian, Yu and Zheng, Jiani and Wang, Haochen and Teng, Zhiyang and Wang, Zhuochen and Wang, Yinjie and Tong, Yunhai and Wang, Mengdi and Li, Xiangtai},
  journal={arXiv preprint arXiv:2511.09611},
  year={2025}
}

🤝 Acknowledgments

This work is heavily based on MMaDA and Lumina-DiMOO . Thanks to all the authors for their great work.

Static Web Hosting on the Intel N150: FreeBSD, SmartOS, NetBSD, OpenBSD and Linux Compared

Lobsters
it-notes.dragas.net
2025-11-19 09:10:07
Comments...
Original Article

I often get very specific infrastructure requests from clients. Most of the time it is some form of hosting. My job is usually to suggest and implement the setup that fits their goals, skills and long term plans.

If there are competent technicians on the other side, and they are willing to learn or already comfortable with Unix style systems, my first choices are usually one of the BSDs or an illumos distribution. If they need a control panel, or they already have a lot of experience with a particular stack that will clearly help them, I will happily use Linux and it usually delivers solid, reliable results.

Every now and then someone asks the question I like the least:

“But how does it perform compared to X or Y?”

I have never been a big fan of benchmarks. At best they capture a very specific workload on a very specific setup. They are almost never a perfect reflection of what will happen in the real world.

For example, I discovered that idle bhyve VMs seem to use fewer resources when the host is illumos than when the host is FreeBSD. It looks strange at first sight, but the illumos people are clearly working very hard on this, and the result is a very capable and efficient platform.

Despite my skepticism, from time to time I enjoy running some comparative tests. I already did it with Proxmox KVM versus FreeBSD bhyve , and I also compared Jails, Zones, bhyve and KVM on the same Intel N150 box. That led to the FreeBSD vs SmartOS article where I focused on CPU and memory performance on this small mini PC.

This time I wanted to do something simpler, but also closer to what I see every day: static web hosting.

Instead of synthetic CPU or I/O tests, I wanted to measure how different operating systems behave when they serve a small static site with nginx, both over HTTP and HTTPS.

This is not meant to be a super rigorous benchmark. I used the default nginx packages, almost default configuration, and did not tune any OS specific kernel settings. In my experience, careful tuning of kernel and network parameters can easily move numbers by several tens of percentage points. The problem is that very few people actually spend time chasing such optimizations. Much more often, once a limit is reached, someone yells “we need mooooar powaaaar” while the real fix would be to tune the existing stack a bit.

So the question I want to answer here is more modest and more practical:

With default nginx and a small static site, how much does the choice of host OS really matter on this Intel N150 mini PC?

Spoiler: less than people think, at least for plain HTTP. Things get more interesting once TLS enters the picture.


Disclaimer
These benchmarks are a snapshot of my specific hardware, network and configuration. They are useful to compare relative behavior on this setup. They are not a universal ranking of operating systems. Different CPUs, NICs, crypto extensions, kernel versions or nginx builds can completely change the picture.


Test setup

The hardware is the same Intel N150 mini PC I used in my previous tests: a small, low power box that still has enough cores to be interesting for lab and small production workloads.

On it, I installed several operating systems and environments, always on the bare metal, not nested inside each other. On each OS I installed nginx from the official packages.

Software under test

On the host:

SmartOS , with:
- a Debian 12 LX zone
- an Alpine Linux 3.22 LX zone
- a native SmartOS zone

FreeBSD 14.3-RELEASE:
- nginx running inside a native jail

OpenBSD 7.8:
- nginx on the host

NetBSD 10.1:
- nginx on the host

Debian 13.2:
- nginx on the host

Alpine Linux 3.22:
- nginx on the host

I also tried to include DragonFlyBSD , but the NIC in this box is not supported. Using a different NIC just for one OS would have made the comparison meaningless, so I excluded it.

nginx configuration

In all environments:

  • nginx was installed from the system packages
  • worker_processes was set to auto
  • the web root contained the same static content

The important part is that I used exactly the same nginx.conf file for all operating systems and all combinations in this article. I copied the same configuration file verbatim to every host, jail and zone. The only changes were the IP address and file paths where needed, for example for the TLS certificate and key.

The static content was a default build of the example site generated by BSSG , my Bash static site generator . The web root was the same logical structure on every OS and container type.

There is no OS specific tuning in the configuration and no kernel level tweaks. This is very close to a “package install plus minimal config” situation.

TLS configuration

For HTTPS I used a very simple configuration, identical on every host.

Self signed certificate created with:

openssl req -x509 -newkey rsa:4096 -nodes -keyout server.key -out server.crt -days 365 -subj "/CN=localhost"  

Example nginx server block for HTTPS (simplified):

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name _;

    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    root /var/www/html;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }
}

The HTTP virtual host is also the same everywhere, with the root pointing to the BSSG example site.

Load generator

The tests were run from my workstation on the same LAN:

  • client host: a mini PC machine connected at 2.5 Gbit/s
  • switch: 2.5 Gbit/s
  • test tool: wrk

For each target host I ran:

  • wrk -t4 -c50 -d10s http://IP
  • wrk -t4 -c10 -d10s http://IP
  • wrk -t4 -c50 -d10s https://IP
  • wrk -t4 -c10 -d10s https://IP

Each scenario was executed multiple times to reduce noise; the numbers below are medians (or very close to them) from the runs.
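
As a rough illustration of that process, a tiny harness can run wrk repeatedly, parse the Requests/sec line, and report the median. This is only a sketch of the approach, not the script used for this article; the target URL and run count are placeholders:

import re
import statistics
import subprocess

# Placeholders: adjust the target URL and the number of repetitions.
TARGET = "http://192.0.2.10"
RUNS = 5

def run_wrk(url):
    """Run one wrk pass and return the Requests/sec figure it reports."""
    out = subprocess.run(
        ["wrk", "-t4", "-c50", "-d10s", url],
        capture_output=True, text=True, check=True,
    ).stdout
    return float(re.search(r"Requests/sec:\s+([\d.]+)", out).group(1))

results = [run_wrk(TARGET) for _ in range(RUNS)]
print(f"median over {RUNS} runs: {statistics.median(results):,.0f} req/s")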

The contenders

To keep things readable, I will refer to each setup as follows:

  • SmartOS Debian LX → SmartOS host, Debian 12 LX zone
  • SmartOS Alpine LX → SmartOS host, Alpine 3.22 LX zone
  • SmartOS Native → SmartOS host, native zone
  • FreeBSD Jail → FreeBSD 14.3-RELEASE, nginx in a jail
  • OpenBSD Host → OpenBSD 7.8, nginx on the host
  • NetBSD Host → NetBSD 10.1, nginx on the host
  • Debian Host → Debian 13.2, nginx on the host
  • Alpine Host → Alpine 3.22, nginx on the host

Everything uses the same nginx configuration file and the same static site.

Static HTTP results

Let us start with plain HTTP, since this removes TLS from the picture and focuses on the kernel, network stack and nginx itself.

HTTP, 4 threads, 50 concurrent connections

Approximate median wrk results:

Environment HTTP 50 connections
SmartOS Debian LX ~46.2 k
SmartOS Alpine LX ~49.2 k
SmartOS Native ~63.7 k
FreeBSD Jail ~63.9 k
OpenBSD Host ~64.1 k
NetBSD Host ~64.0 k
Debian Host ~63.8 k
Alpine Host ~63.9 k

Two things stand out:

  1. All the native or jail/container setups on the hosts that are not LX zones cluster around 63 to 64k requests per second.
  2. The two SmartOS LX zones sit slightly lower, in the 46 to 49k range, which is still very respectable for this hardware.

In other words, as long as you are on the host or in something very close to it (FreeBSD jail, SmartOS native zone, NetBSD, OpenBSD, Linux on bare metal), static HTTP on nginx will happily max out around 64k requests per second with this small Intel N150 CPU.

The Debian and Alpine LX zones on SmartOS are a bit slower, but not dramatically so. They still deliver close to 50k requests per second and, in a real world scenario, you would probably saturate the network or the client long before hitting those numbers.

HTTP, 4 threads, 10 concurrent connections

With fewer concurrent connections, absolute throughput drops, but the relative picture is similar:

  • SmartOS Native around 44k
  • NetBSD and Alpine Host around 34 to 35k
  • FreeBSD, Debian, OpenBSD around 31 to 33k
  • The SmartOS LX zones sit slightly below the native zone, around 35 to 37k req/s

The important conclusion is simple:

For plain HTTP static hosting, once nginx is installed and correctly configured, the choice between these operating systems makes very little difference on this hardware. Zones and jails add negligible overhead, LX zones add a small one.

If you are only serving static content over HTTP, your choice of OS should be driven by other factors: ecosystem, tooling, update strategy, your own expertise and preference.

Static HTTPS results

TLS is where things start to diverge more clearly and where CPU utilization becomes interesting.

HTTPS, 4 threads, 50 concurrent connections

Approximate medians:

Environment HTTPS 50 connections CPU notes at 50 HTTPS connections
SmartOS Debian LX ~51.4 k CPU saturated
SmartOS Alpine LX ~40.4 k CPU saturated
SmartOS Native ~52.8 k CPU saturated
FreeBSD Jail ~62.9 k around 60% CPU idle
OpenBSD Host ~39.7 k CPU saturated
NetBSD Host ~40.4 k CPU at 100%
Debian Host ~62.8 k about 20% CPU idle
Alpine Host ~62.4 k small idle headroom, around 7% idle

These numbers tell a more nuanced story.

  1. FreeBSD, Debian and Alpine on bare metal form a “fast TLS” group.
    All three sit around 62 to 63k requests per second with 50 concurrent HTTPS connections.

  2. FreeBSD does this while using significantly less CPU.
    During the HTTPS tests with 50 connections, the FreeBSD host still had around 60% CPU idle. It is the platform that handled TLS load most comfortably in terms of CPU headroom.

  3. Debian and Alpine are close in throughput, but push the CPU harder.
    Debian still had some idle time left, Alpine even less. In practice, all three are excellent here, but FreeBSD gives you more room before you hit the wall.

  4. SmartOS, NetBSD and OpenBSD form a “good but heavier” TLS group.
    Their HTTPS throughput is in the 40 to 52k req/s range and they reach full CPU usage at 50 concurrent connections. OpenBSD and NetBSD stabilize around 39 to 40k req/s. SmartOS native and the Debian LX zone manage slightly better (around 51 to 53k) but still with the CPU pegged.

HTTPS, 4 threads, 10 concurrent connections

With lower concurrency:

  • FreeBSD, Debian and Alpine still sit in roughly the 29 to 31k req/s range
  • SmartOS Native and LX zones are in the mid to high 30k range
  • NetBSD and OpenBSD sit around 26 to 27k req/s

The relative pattern is the same: for this TLS workload, FreeBSD and modern Linux distributions on bare metal appear to make better use of the cryptographic capabilities of the CPU, delivering higher throughput or more headroom or both.

What TLS seems to highlight

The HTTPS tests point to something that is not about nginx itself, but about the TLS stack and how well it can exploit the hardware.

On this Intel N150, my feeling is:

  • FreeBSD, with the userland and crypto stack I am running, is very efficient at TLS here. It delivers the highest throughput while keeping plenty of CPU in reserve.
  • Debian and Alpine, with their recent kernels and libraries, are also strong performers, close to FreeBSD in throughput, but with less idle CPU.
  • NetBSD, OpenBSD and SmartOS (native and LX) are still perfectly capable of serving a lot of HTTPS traffic, but they have to work harder to keep up and they hit 100% CPU much earlier.

This matches what I see in day to day operations: TLS performance is often less about “nginx vs something else” and more about the combination of:

  • the TLS library version and configuration
  • how well the OS uses the CPU crypto instructions
  • kernel level details in the network and crypto paths

I suspect the differences here are mostly due to how each system combines its TLS stack (OpenSSL, LibreSSL and friends), its kernel and its hardware acceleration support. It would take a deeper dive into profiling and configuration knobs to attribute the gaps precisely.

In any case, on this specific mini PC, if I had to pick a platform to handle a large amount of HTTPS static traffic, FreeBSD, Debian and Alpine would be my first candidates, in that order.

Zones, jails and containers: overhead in practice

Another interesting part of the story is the overhead introduced by different isolation technologies.

From these tests and the previous virtualization article on the same N150 machine , the picture is consistent:

  • FreeBSD jails behave almost like bare metal.
    For both HTTP and HTTPS, running nginx in a jail on FreeBSD 14.3-RELEASE produces numbers practically identical to native hosts on other OSes. CPU utilization is excellent, especially under TLS.

  • SmartOS native zones are also very close to the metal.
    Static HTTP performance reaches the same 64k req/s region and HTTPS is only slightly behind the “fast TLS” group, although with higher CPU usage.

  • SmartOS LX zones introduce a noticeable but modest overhead.
    Both Debian and Alpine LX zones on SmartOS perform slightly worse than the native zone or FreeBSD jails. For static HTTP they are still very fast. For HTTPS the Debian LX zone remains competitive but costs more CPU, while the Alpine LX zone is slower.

This is not a surprise. LX zones translate Linux system calls on top of the illumos kernel and there is a cost for that. The important point is that the cost is not catastrophic. On a bigger CPU you would probably not notice it unless you are really pushing the limits.

What this means for real workloads

It is easy to get lost in tables and percentages, so let us go back to the initial question.

A client wants static hosting.
Does the choice between FreeBSD, SmartOS, NetBSD or Linux matter in terms of performance?

For plain HTTP on this hardware, with nginx and the same configuration:

  • Not really.
    All the native hosts and FreeBSD jails deliver roughly the same maximum throughput, in the 63 to 64k req/s range. SmartOS LX zones are slightly slower but still strong.

For HTTPS :

  • Yes, it starts to matter a bit more.
  • FreeBSD stands out for how relaxed the CPU is under high TLS load.
  • Debian and Alpine are very close in throughput, with more CPU used but still with some headroom.
  • SmartOS, NetBSD and OpenBSD can still push a lot of HTTPS traffic, but they reach 100% CPU earlier and stabilize at lower request rates.

Does this mean you should always choose FreeBSD or Debian or Alpine for static HTTPS hosting?

Not necessarily.

In real deployments, the bottleneck is rarely the TLS performance of a single node serving a small static site. Network throughput, storage, logging, reverse proxies, CDNs and application layers all play a role.

However, knowing that FreeBSD and current Linux distributions can squeeze more out of a small CPU under TLS is useful when you are:

  • sizing hardware for small VPS nodes that must serve many HTTPS requests
  • planning to consolidate multiple services on a low power box
  • deciding whether you can afford to keep some CPU aside for other tasks (cache, background jobs, monitoring, and so on)

As always, the right answer depends on the complete picture: your skills, your tooling, your backups, your monitoring, the rest of your stack, and your tolerance for troubleshooting when things go sideways.

Final thoughts

From these small tests, my main takeaways are:

  1. Static HTTP is basically solved on all these platforms.
    On a modest Intel N150, every system tested can push around 64k static HTTP requests per second with nginx set to almost default settings. For many use cases, that is already more than enough.

  2. TLS performance is where the OS and crypto stack start to matter.
    FreeBSD, Debian and Alpine squeeze more HTTPS requests out of the N150, and FreeBSD in particular does it with a surprising amount of idle CPU left. NetBSD, OpenBSD and SmartOS need more CPU to reach similar speeds and stabilize at lower throughput once the CPU is saturated.

  3. Jails and native zones are essentially free, LX zones cost a bit more.
    FreeBSD jails and SmartOS native zones show very little overhead for this workload. SmartOS LX zones are still perfectly usable, but if you are chasing every last request per second you will see the cost of the translation layer.

  4. Benchmarks are only part of the story.
    If your team knows OpenBSD inside out and has tooling, scripts and workflows built around it, you might happily accept using more CPU on TLS in exchange for security features, simplicity and familiarity. The same goes for NetBSD or SmartOS in environments where their specific strengths shine.

I will not choose an operating system for a client just because a benchmark looks nicer. These numbers are one of the many inputs I consider. What matters most is always the combination of reliability, security, maintainability and the human beings who will have to operate the system at three in the morning when something goes wrong.

Still, it is nice to know that if you put a tiny Intel N150 in front of a static site and you pick FreeBSD or a modern Linux distribution for HTTPS, you are giving that little CPU a fair chance to shine.

Introducing Apache Fory™ Rust: A Versatile Serialization Framework for the Modern Age

Lobsters
fory.apache.org
2025-11-19 09:02:16
Comments...
Original Article

TL;DR: Apache Fory Rust is a cross-language serialization framework that delivers ultra-fast serialization while automatically handling circular references, trait objects, and schema evolution. Built with Rust's safety guarantees and zero-copy techniques, it's designed for developers who refuse to compromise between performance and developer experience.



The Serialization Dilemma

Every backend engineer has faced this moment: your application needs to serialize complex data structures (nested objects, circular references, polymorphic types), and you're forced to choose between three bad options:

  1. Fast but fragile: Hand-rolled binary formats that break with schema changes
  2. Flexible but slow: JSON/Protocol with 10x performance overhead
  3. Complex and limiting: Existing solutions that don't support your language's advanced features

Apache Fory Rust eliminates this false choice. It's a serialization framework that delivers exceptional performance while automatically handling the complexities of modern applications—no IDL files, no manual schema management, no compromises.

What Makes Apache Fory Rust Different?

1. Truly Cross-Language

Apache Fory Rust speaks the same binary protocol as Java, Python, C++, Go, and other language implementations. Serialize data in Rust, deserialize in Python — it just works. No schema files. No code generation. No version mismatches.

// Rust: Serialize
let user = User {
name: "Alice".to_string(),
age: 30,
metadata: HashMap::from([("role", "admin")]),
};
let bytes = fory.serialize(&user);

// Python: Deserialize (same binary format!)
user = fory.deserialize(bytes) # Just works!

This isn't just convenient — it changes how we develop microservices architectures where different teams use different languages.

2. Automatic Shared/Circular Reference Handling

Most serialization frameworks panic when encountering circular references. Apache Fory tracks and preserves reference identity automatically:

Shared Reference :

use fory::Fory;
use std::rc::Rc;

let fory = Fory::default();

// Create a shared value
let shared = Rc::new(String::from("shared_value"));

// Reference it multiple times
let data = vec![shared.clone(), shared.clone(), shared.clone()];

// The shared value is serialized only once
let bytes = fory.serialize(&data);
let decoded: Vec<Rc<String>> = fory.deserialize(&bytes)?;

// Verify reference identity is preserved
assert_eq!(decoded.len(), 3);
assert_eq!(*decoded[0], "shared_value");

// All three Rc pointers point to the same object
assert!(Rc::ptr_eq(&decoded[0], &decoded[1]));
assert!(Rc::ptr_eq(&decoded[1], &decoded[2]));

Circular Reference :

use fory::{ForyObject, RcWeak};

#[derive(ForyObject)]
struct Node {
value: i32,
parent: RcWeak<RefCell<Node>>, // Weak pointer breaks cycles
children: Vec<Rc<RefCell<Node>>>, // Strong references tracked
}

// Build a parent-child tree with circular references
let parent = Rc::new(RefCell::new(Node { ... }));
let child = Rc::new(RefCell::new(Node {
parent: RcWeak::from(&parent), // Points back to parent
...
}));
parent.borrow_mut().children.push(child.clone());

// Serialization handles the cycle automatically
let bytes = fory.serialize(&parent);
let decoded: Rc<RefCell<Node>> = fory.deserialize(&bytes)?;

// Reference relationships preserved!
assert!(Rc::ptr_eq(&decoded, &decoded.borrow().children[0].borrow().parent.upgrade().unwrap()));

This isn't just a feature—it's essential for graph databases, object-relational mappers, and domain models.

3. Trait Object Serialization

Rust's trait system enables powerful abstractions, but serializing Box<dyn Trait> is notoriously difficult. Apache Fory makes it trivial:

use fory::{ForyObject, Serializer, register_trait_type};

trait Animal: Serializer {
fn speak(&self) -> String;
}

#[derive(ForyObject)]
struct Dog { name: String, breed: String }

#[derive(ForyObject)]
struct Cat { name: String, color: String }

// Register implementations
register_trait_type!(Animal, Dog, Cat);

// Serialize heterogeneous collections
let animals: Vec<Box<dyn Animal>> = vec![
Box::new(Dog { ... }),
Box::new(Cat { ... }),
];

let bytes = fory.serialize(&animals);
let decoded: Vec<Box<dyn Animal>> = fory.deserialize(&bytes)?;

// Polymorphism preserved!
decoded[0].speak(); // "Woof!"
decoded[1].speak(); // "Meow!"

Alternative: Using dyn Any without trait registration :

use std::rc::Rc;
use std::any::Any;

// No trait definition or registration needed
let dog: Rc<dyn Any> = Rc::new(Dog { name: "Rex".to_string(), breed: "Labrador".to_string() });
let cat: Rc<dyn Any> = Rc::new(Cat { name: "Whiskers".to_string(), color: "Orange".to_string() });

let bytes = fory.serialize(&dog);
let decoded: Rc<dyn Any> = fory.deserialize(&bytes)?;

// Downcast to concrete type
let unwrapped = decoded.downcast_ref::<Dog>().unwrap();
assert_eq!(unwrapped.name, "Rex");

Supports :

  • Box<dyn Trait> - Owned trait objects
  • Rc<dyn Trait> / Arc<dyn Trait> - Reference-counted trait objects
  • Rc<dyn Any> / Arc<dyn Any> - Runtime type dispatch without traits
  • Auto-generated wrapper types for standalone serialization

This unlocks plugin systems, heterogeneous collections, and extensible architectures that were previously impossible to serialize.

4. Schema Evolution Without Breaking Changes

Microservices evolve independently. Apache Fory's Compatible mode allows schema changes without coordination:

use fory::{Fory, ForyObject};

// Service A: Version 1
#[derive(ForyObject)]
struct User {
name: String,
age: i32,
address: String,
}

let mut fory_v1 = Fory::default().compatible(true);
fory_v1.register::<User>(1);

// Service B: Version 2 (evolved independently)
#[derive(ForyObject)]
struct User {
name: String,
age: i32,
// address removed
phone: Option<String>, // New field
metadata: HashMap<String, String>, // Another new field
}

let mut fory_v2 = Fory::default().compatible(true);
fory_v2.register::<User>(1);

// V1 data deserializes into V2 structure
let v1_bytes = fory_v1.serialize(&user_v1);
let user_v2: User = fory_v2.deserialize(&v1_bytes)?;
// Missing fields get default values automatically

Compatibility rules :

  • ✅ Add new fields (default values applied)
  • ✅ Remove fields (skipped during deserialization)
  • ✅ Reorder fields (matched by name)
  • ✅ Change nullability (T ↔ Option<T>)
  • ❌ Type changes (except nullable variants)

This is critical for zero-downtime deployments and polyglot microservices.

The Technical Foundation

Protocol Design

Apache Fory uses a sophisticated binary protocol designed for both performance and flexibility:

| fory header | reference meta | type meta | value data |

Key innovations:

  1. Efficient encoding: Variable-length integers, compact type IDs, bit-packed flags (see the sketch after this list)
  2. Reference tracking: Deduplicates shared objects automatically (serialize once, reference thereafter)
  3. Meta compression: Gzip compression for type metadata in meta-sharing mode
  4. Little-endian layout: Optimized for modern CPU architectures
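
As a quick illustration of the first point, here is a generic LEB128-style variable-length integer codec in Python. It sketches the general technique of spending fewer bytes on small values; it is not necessarily Fory's exact wire encoding:

def encode_varint(value):
    """Encode an unsigned integer using 7 bits per byte (LEB128 style)."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # high bit set: more bytes follow
        else:
            out.append(byte)         # high bit clear: last byte
            return bytes(out)

def decode_varint(data):
    """Decode a LEB128-style unsigned integer back to a Python int."""
    result, shift = 0, 0
    for byte in data:
        result |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return result
        shift += 7
    raise ValueError("truncated varint")

assert decode_varint(encode_varint(300)) == 300  # 300 fits in two bytes
assert len(encode_varint(300)) == 2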

Compile-Time Code Generation

Unlike reflection-based frameworks, Apache Fory generates serialization code at compile time via procedural macros:

use fory::ForyObject;

#[derive(ForyObject)]
struct Person {
name: String,
age: i32,
address: Address,
}

// Macro generates:
// - fory_write_data() for serialization
// - fory_read_data() for deserialization
// - fory_reserved_space() for buffer pre-allocation
// - fory_get_type_id() for type registration

Benefits :

  • Zero runtime overhead: No reflection, no vtable lookups
  • 🛡️ Type safety: Compile-time errors instead of runtime panics
  • 📦 Small binary size: Only code for types you actually use
  • 🔍 IDE support: Full autocomplete and error checking

Architecture

Apache Fory Rust consists of three focused crates:

fory/            # High-level API
└─ Convenience wrappers, derive re-exports

fory-core/ # Core serialization engine
├─ fory.rs # Main entry point
├─ buffer.rs # Zero-copy binary I/O
├─ serializer/ # Type-specific serializers
├─ resolver/ # Type registration & dispatch
├─ meta/ # Meta string compression
└─ row/ # Row format implementation

fory-derive/ # Procedural macros
├─ object/ # ForyObject derive macro
└─ fory_row.rs # ForyRow derive macro

This modular design ensures clean separation of concerns and makes the codebase maintainable.

Benchmarks: Real-World Performance

Datatype Size Operation Fory TPS JSON TPS Protobuf TPS Fastest
company small serialize 10,063,906 761,673 896,620 fory
company medium serialize 412,507 33,835 37,590 fory
company large serialize 9,183 793 880 fory
ecommerce_data small serialize 2,350,729 206,262 256,970 fory
ecommerce_data medium serialize 59,977 4,699 5,242 fory
ecommerce_data large serialize 3,727 266 295 fory
person small serialize 13,632,522 1,345,189 1,475,035 fory
person medium serialize 3,839,656 337,610 369,031 fory
person large serialize 907,853 79,631 91,408 fory
simple_list small serialize 27,726,945 4,874,957 4,643,172 fory
simple_list medium serialize 4,770,765 401,558 397,551 fory
simple_list large serialize 606,061 41,061 44,565 fory
simple_map small serialize 22,862,369 3,888,025 2,695,999 fory
simple_map medium serialize 2,128,973 204,319 193,132 fory
simple_map large serialize 177,847 18,419 18,668 fory
simple_struct small serialize 35,729,598 10,167,045 8,633,342 fory
simple_struct medium serialize 34,988,279 9,737,098 6,433,350 fory
simple_struct large serialize 31,801,558 4,545,041 7,420,049 fory
system_data small serialize 5,382,131 468,033 569,930 fory
system_data medium serialize 174,240 11,896 14,753 fory
system_data large serialize 10,671 876 1,040 fory

When to Use Apache Fory Rust

Ideal Use Cases

  1. Microservices with polyglot teams

    • Different services in different languages
    • Need seamless data exchange without schema files
    • Schema evolution across independent deployments
  2. High-performance data pipelines

    • Processing millions of records per second
    • Memory-constrained environments (use row format)
    • Analytics workloads with selective field access
  3. Complex domain models

    • Circular references (parent-child relationships, graphs)
    • Polymorphic types (trait objects, inheritance hierarchies)
    • Rich object graphs with shared references
  4. Real-time systems

    • Low-latency requirements ( <1ms serialization)
    • Memory-mapped file access
    • Zero-copy deserialization critical

⚠️ Consider Alternatives If

  1. You need human-readable data : Use JSON/YAML for debugging
  2. You need long-term storage format : Use Parquet for data lakes
  3. Your data is trivial : serde + bincode is simpler for basic types

Getting Started in 5 Minutes

Installation

Add to Cargo.toml :

[dependencies]
fory = "0.13"

Basic Object Serialization

use fory::{Fory, Error, ForyObject};

#[derive(ForyObject, Debug, PartialEq)]
struct User {
name: String,
age: i32,
email: String,
}

fn main() -> Result<(), Error> {
let mut fory = Fory::default();
fory.register::<User>(1); // Register with unique ID
let user = User {
name: "Alice".to_string(),
age: 30,
email: "alice@example.com".to_string(),
};
// Serialize
let bytes = fory.serialize(&user);
// Deserialize
let decoded: User = fory.deserialize(&bytes)?;
assert_eq!(user, decoded);
Ok(())
}

Cross-Language Serialization

use fory::Fory;

// Enable cross-language mode
let mut fory = Fory::default().compatible(true).xlang(true);

// Register with id/namespace for cross-language compatibility
fory.register_by_namespace::<User>(1);
// fory.register_by_namespace::<User>("example", "User");

let bytes = fory.serialize(&user);
// This can now be deserialized in Java, Python, Go, etc.

Register types with consistent IDs or names across all languages:

  • By ID ( fory.register::<User>(1) ): Faster serialization, more compact encoding, but requires coordination to avoid ID conflicts
  • By name ( fory.register_by_name::<User>("example.User") ): More flexible, less prone to conflicts, easier to manage across teams, but slightly larger encoding

Supported Types

Apache Fory Rust supports a comprehensive type system:

Primitives : bool , i8 , i16 , i32 , i64 , f32 , f64 , String

Collections : Vec<T> , HashMap<K,V> , BTreeMap<K,V> , HashSet<T> , Option<T>

Smart Pointers : Box<T> , Rc<T> , Arc<T> , RcWeak<T> , ArcWeak<T> , RefCell<T> , Mutex<T>

Date/Time : chrono::NaiveDate , chrono::NaiveDateTime

Custom Types : Derive ForyObject for object graphs, ForyRow for row format

Trait Objects : Box<dyn T> , Rc<dyn T> , Arc<dyn T> , Rc<dyn Any> , Arc<dyn Any>

Roadmap: What's Next

Apache Fory Rust is production-ready today, but we're just getting started; development continues actively:

Shipped in v0.13

  • ✅ Static codegen via procedural macros
  • ✅ Row format serialization with zero-copy
  • ✅ Cross-language object graph serialization
  • ✅ Shared and circular reference tracking
  • ✅ Weak pointer support (RcWeak, ArcWeak)
  • ✅ Trait object serialization (Box/Rc/Arc)
  • ✅ Schema evolution in compatible mode

🚧 Coming Soon

  • Cross-language reference serialization: serialize Rc/Arc to/from other languages.
  • Partial row updates: Mutate row format in-place

🎯 Help Wanted

We're actively seeking contributors for:

  • Performance tuning : Profile and optimize hot paths
  • Documentation : More examples, tutorials, and guides
  • Testing : Fuzzing, property tests, edge case coverage

Production Considerations

Thread Safety

Fory becomes fully thread-safe after registration is complete. Once every type is registered (which requires &mut Fory ), wrap the instance in an Arc and freely share it across worker threads for concurrent serialization and deserialization.

use fory::Fory;
use std::{sync::Arc, thread};

let mut fory = Fory::default();
fory.register::<Item>(1)?;
let fory = Arc::new(fory); // `Fory` is Send + Sync once registration is done

let item = Item::default();
let handles: Vec<_> = (0..4)
.map(|_| {
let fory = Arc::clone(&fory);
let input = item.clone();
thread::spawn(move || {
let bytes = fory.serialize(&input);
let decoded: Item = fory.deserialize(&bytes).expect("valid data");
(bytes, decoded)
})
})
.collect();

for handle in handles {
let (bytes, decoded) = handle.join().expect("thread finished");
// work with `bytes` / `decoded`
}

Error Handling

Apache Fory uses Result<T, Error> for all fallible operations:

use fory::Error;

match fory.deserialize::<User>(&bytes) {
Ok(user) => process_user(user),
Err(Error::TypeMismatch) => log::error!("Schema mismatch"),
Err(Error::BufferTooShort) => log::error!("Incomplete data"),
Err(e) => log::error!("Deserialization failed: {}", e),
}

Documentation

Community and Contribution

Apache Fory is an Apache Software Foundation project with a vibrant, growing community:

How to Contribute

We welcome contributions of all kinds:

  1. Code : Implement features from the roadmap
  2. Docs : Write tutorials, examples, and guides
  3. Testing : Add benchmarks, fuzz tests, integration tests
  4. Feedback : Report bugs, request features, share use cases

See CONTRIBUTING.md for guidelines.

License

Apache Fory is licensed under the Apache License 2.0 , a permissive open-source license that allows commercial use, modification, and distribution.

Conclusion

Apache Fory Rust represents a paradigm shift in serialization:

  • No more trade-offs : Get performance and flexibility
  • No more boilerplate : Derive macros handle the complexity
  • No more lock-in : Trait-object and shared reference support by nature

Whether you're building microservices, data pipelines, or real-time systems, Apache Fory Rust delivers the performance you need with the ergonomics you deserve.

Try it today :

Join the community :

git clone https://github.com/apache/fory.git
cd fory/rust
cargo test --features tests

Share your experience :

  • Write a blog post about your use case
  • Present at your local Rust meetup
  • Contribute benchmarks from your domain

When high availability brings downtime

Lobsters
medium.com
2025-11-19 08:19:24
Comments...

Quoting Matthew Prince

Simon Willison
simonwillison.net
2025-11-19 08:02:36
Cloudflare's network began experiencing significant failures to deliver core network traffic [...] triggered by a change to one of our database systems' permissions which caused the database to output multiple entries into a “feature file” used by our Bot Management system. That feature file, in tur...
Original Article

Cloudflare's network began experiencing significant failures to deliver core network traffic [...] triggered by a change to one of our database systems' permissions which caused the database to output multiple entries into a “feature file” used by our Bot Management system. That feature file, in turn, doubled in size. The larger-than-expected feature file was then propagated to all the machines that make up our network. [...] The software had a limit on the size of the feature file that was below its doubled size. That caused the software to fail.

Matthew Prince , Cloudflare outage on November 18, 2025
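
The failure mode is easy to picture. A minimal sketch, not Cloudflare's actual code (the cap, types, and names are assumptions): a loader with a hard limit on the feature file turns a file that silently doubles in size into a hard failure on every machine that receives it.

// Illustrative only; the limit and error handling are assumptions.
const MAX_FEATURES: usize = 200;

fn load_feature_file(entries: &[String]) -> Result<(), String> {
    if entries.len() > MAX_FEATURES {
        // A doubled feature file trips this check everywhere it is propagated.
        return Err(format!(
            "feature file too large: {} entries (limit {})",
            entries.len(),
            MAX_FEATURES
        ));
    }
    // ... parse and apply the features ...
    Ok(())
}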

Exploring the Limits of Large Language Models as Quant Traders

Hacker News
nof1.ai
2025-11-19 07:36:25
Comments...
Original Article


What nicotine does to your brain

Hacker News
economist.com
2025-11-19 07:20:33
Comments...

A down detector for down detector's down detector

Hacker News
downdetectorsdowndetectorsdowndetector.com
2025-11-19 07:09:52
Comments...
Original Article

A tiny independent status checker.

Use smart tech, turn heat down, service boilers: how to save money on energy bills

Guardian
www.theguardian.com
2025-11-19 07:00:09
From turning down thermostats to make savings to installing reflectors to push warmth back into your home “When it comes to staying warm and saving energy, small changes can make a big difference,” says Sarah Pennells, a consumer finance specialist at the investment company Royal London. Continue re...
Original Article

Use smart tech

“When it comes to staying warm and saving energy, small changes can make a big difference,” says Sarah Pennells, a consumer finance specialist at the investment company Royal London.

First, use a timer if your boiler or thermostat has one.

For more control, you could switch to a smart thermostat , which will connect your heating system to the internet. This will let you control your thermostat remotely, usually through a mobile app, meaning you can turn your heating off or down if you have had an unexpected change of plan. A smart thermostat effectively adds a timer to a boiler. You can then use the app to schedule heating and hot water.

Smart thermostats vary and can offer different features, including multi-room control, hot water control, and “geofencing” to track when you leave and enter your home. They typically cost between £60 and £250, depending on the make and model.

Switching to a smart thermostat allows you to control your thermostat remotely, usually through a mobile app. Photograph: StefaNikolic/Getty Images

Some can be fitted without help, such as the Bosch Room Thermostat II ( £69.99 on Amazon); others, such as the Hive Thermostat V4 ( £155 on B&Q), require professional fitting. This can normally be booked through the retailer, but may incur an additional cost.

Some energy suppliers offer deals on smart thermostats made by companies they have partnerships with, such as Octopus Energy’s partnership with tado° , which gives customers up to 50% off tado° products. A Wireless Smart Thermostat X Starter Kit is £112, down from £159.99.

Reduce temperatures

Turning down your thermostat from 22C to 21C can save a typical household £90 a year in Great Britain, according to the Energy Saving Trust . For most people, a comfortable temperature falls somewhere between 18C and 21C.

According to Citizens Advice , every degree you turn your thermostat down can save you about 10% on your energy bill, but it warns that if you are elderly or have a health condition you should not set it below 21C.

For most people, a comfortable temperature in the home is between 18C and 21C. Photograph: Ridofranz/Getty Images

It also advises that having the heating on all the time at a low temperature will cost more than having it on at a slightly higher temperature but for a shorter interval.

Setting the heating to go off 30 minutes before you leave the house or go to sleep will also reduce your bill.

Lower the flow

If you have a combi boiler, you can lower its flow temperature – that is, the temperature of the water flowing out and to the radiators.

If you have a system boiler or a hot water cylinder, EDF Energy advises you don’t do this on your own but get advice from an engineer.

On many boilers the flow temperature is set too high by default, at about 75-80C. Reducing it to about 60C can cut your gas bill without making much difference to how warm you feel.

“This is especially effective in homes with well-sized radiators and good insulation … and doesn’t noticeably affect comfort,” Pennells says.

The charity Nesta has an interactive online tool that gives detailed instructions on how to change the setting on your boiler. The charity recommends you take a picture of the boiler controls and settings before proceeding so that you have a record of your initial settings.

Turn down radiators

If you have thermostatic radiator valves (TRVs), the dials that control how much hot water enters the radiators they are attached to, you will be able to adjust the temperature separately in each room. TRVs usually have a scale from 0 to 6, where 0 is off and 6 is fully open.

The Energy Saving Trust recommends putting it on the lowest setting that keeps the room at a comfortable temperature, which could be 3 or 4 for rooms you use the most and 2 or 3 for others. It says adding TRVs to a system that already has a programmer and thermostat can save a household £35 a year.

While it may be tempting to turn off your heating in the name of saving cash, experts say this can lead to mould and damp – and this could be more costly and dangerous to deal with in the long term.

“During the energy crisis, we’ve found that people have changed their behaviours and started to warm the person rather than the house,” says Sophie Burr, the project development coordinator at National Energy Action . “Our research has shown that it is more cost-effective to heat the whole space while turning the radiators down to number 2 in rooms you are not using, allowing some heat in those rooms and preventing the growth of mould spores, which can have serious health impacts such as worsening respiratory conditions.”

Get reflectors

The British Gas Energy Trust advises that you use foil behind radiators to reflect heat back into the room. Radiator reflectors allow that heat to be dispersed into the room rather than being absorbed by an external wall, as 35% of the heat in a room is lost through the walls. Reflectors are particularly beneficial on uninsulated external walls.

Although an initial additional cost, they are fairly cheap to buy, easy to install and should last. They can be bought in rolls, and then cut to fit any radiator. They are easy to apply using a provided adhesive or double-sided tape – just make sure the radiators are off and cool first. Screwfix sells a 1.88 sq metre roll for £7.51 , B&Q has a 5 sq metre roll for £14.97 and Amazon sells a 15 sq metre one for £27.99 .

Bleed your radiators every few months so they run efficiently. To do this, make sure your radiators are off and cool. Insert a key ( £3.50 for a two-pack from B&Q) or a flat-head screwdriver into the bleed valve on the radiator, usually at the top corner, and turn it anti-clockwise. You should hear a hissing sound as the air escapes but wait for the hissing to stop and a steady stream of water to appear (which you can catch with a cloth) before turning the valve clockwise to close it again.

A boiler service will improve its efficiency. Photograph: Joe Giddens/PA

Avoid covering your radiators with furniture or curtains – especially under windows. This will help the heat spread wider.

Keep your boiler serviced

A boiler service will improve its efficiency and extend its lifespan by catching and fixing minor problems. Octopus Energy says an unserviced boiler can use up to 10% more energy than one that is annually checked out. “If a boiler is not serviced regularly, it can have a significant impact on fuel efficiency and health,” Burr says.

According to Which?, a standard boiler service costs between £70 and £110.

Some energy companies offer a service as part of their annual cover plans – British Gas, for example, has one in its HomeCare policy, which starts at £19 a month. However, a boiler care plan isn’t for everyone. Which? says that while a cover plan can give peace of mind, sometimes the monthly payments add up to more than paying for an annual service and repairs when needed. It recommends weighing up if you have enough savings to pay for an annual service outright every time.

If you live in rented housing, it is your landlord’s responsibility to ensure that the boiler is serviced and certified yearly. “Annual boiler servicing for all homes in the rental sector is a requirement by law,” Burr says. “For properties with gas boilers, this must be carried out by a Gas Safe engineer, and for oil boilers, by an Oftec-certified engineer. Annual boiler servicing will ensure that the appliance is working efficiently and is not leaking carbon monoxide into your home.”

Improving NAT traversal, part 2: challenges in cloud environments

Lobsters
tailscale.com
2025-11-19 03:49:59
Comments...
Original Article

This is the second in a series of posts about how Tailscale makes secure connections between devices, and the challenges on the path between them. Read on for insights into connecting into one of the trickiest spaces around: commercial clouds. And stay tuned for future posts about where direct, secure networking might be headed.

See the first post in this series for some context on how certain NATs make direct connections tricky, and how Tailscale has worked to improve the landscape.

One domain where NAT traversal remains especially challenging is the public cloud. Amazon Web Services (AWS), Microsoft Azure, Google Cloud, and other large-scale clouds provide NAT solutions for instances in private subnets—and these tend to be the hardest type of NAT to work through, from a peer-to-peer perspective.

Cloud NAT gateways are typically designed with goals like scalability, security isolation, and simplicity for outbound access. They are not particularly tailored for inbound peer connectivity or hole punching.

What follows is a rundown of the cloud NAT issues we face now, how Tailscale and other technologies can work with (or sometimes around) them, and some proposed changes, by us and cloud providers, that could appease all sides, while improving and securing everybody’s platform.

Cloud NAT gateways: symmetric by design

Cloud NAT gateways are designed for scale and reliability of outbound connections, not for peer-to-peer. That makes them highly symmetric by default, which is about the worst case for NAT traversal. A brief summary of the big three:

AWS NAT Gateway

Always symmetric. Each connection gets a randomized source port, per-destination. Scales well for outbound traffic, terrible for peer-to-peer. If both peers are behind AWS NAT, direct tunnels almost never form, so DERP is the fallback.

Azure NAT Gateway

Same story: symmetric by default, randomized port assignment. Azure emphasizes recycling and scaling, not stable mappings. Some limited reuse exists for the same destination IP, but that doesn’t help Tailscale peers.

Google Cloud NAT

Default is also symmetric, but Google uniquely offers an Endpoint-Independent Mapping option if you configure static port allocation. That makes it more friendly for peer-to-peer (P2P), though at the cost of connection scale. By default, though, GCP behaves like AWS and Azure.

Options for better NAT behavior on clouds

If you have servers or containers in the cloud and you want to maximize peer-to-peer performance with Tailscale (or any similar P2P system), what can you do? There are a few approaches:

Give the instance a public IP (and open the firewall)

This is often the simplest and most effective solution. If you assign a public IPv4 address to your virtual machine (VM), and ensure its security group or firewall allows UDP traffic on the WireGuard port, you eliminate the cloud NAT from the equation. Your instance is then just behind its own host firewall, which can be configured for endpoint-independent UDP. Tailscale strongly recommends this for critical infrastructure nodes—essentially turning the “cloud NAT” scenario into a “no NAT” scenario​. On AWS, this means using an Elastic IP or public IP on the instance, instead of routing out through a NAT Gateway. On GCP or Azure, it means an instance NIC with a public IP assignment.

You don’t even need to allow inbound traffic from anywhere; you can restrict inbound UDP port 41641 (or whatever port Tailscale is using) to the IP ranges your other nodes are in. Even with leaving that port open, the WireGuard traffic is authenticated and encrypted. With a public IP, Tailscale will directly coordinate the two endpoints and often get a direct connection (or even use IPv6 if both have it). Many users treat their cloud instances like “virtual servers” and give them public IP addresses anyway, and Tailscale can take advantage of that for easy, direct links.

Four diagram boxes against a blue background: "Main Office" (inside a Double NAT), "Remote Workers," "AWS VPC," and "Digital Ocean VPC." There are interconnections between the workers, the AWS instance (Server, Workload, and Service), and the Digital Ocean instance.
A hybrid cloud can be messy, but there are ways to simplify it.

Use a custom NAT instance or firewall VM

Instead of the managed Cloud NAT services, some advanced users deploy their own NAT gateway using a Linux instance or a virtual router appliance, like pfSense or OPNsense. The advantage here is you can configure the NAT behavior.

For example, a Linux NAT instance using iptables/nf_conntrack in netfilter typically preserves source ports for UDP as long as there’s no conflict, which is effectively endpoint-independent mapping. You could also run something like pfSense with the new ( Tailscale-sponsored ) endpoint-independent NAT option enabled. This way, your cloud VMs still don’t each need public IP addresses, but the NAT they share is under your control, and can be made P2P-friendly. The downside is you’re foregoing the simplicity and scalability of the cloud provider’s native solution. In other words, you’ll have to manage this VM: ensure it’s redundant for HA, handle updates, handle the throughput limits of the instance, and so on.

AWS actually used to suggest stand-alone NAT instances before its managed NAT gateway existed, but it’s more work.

Leverage cloud provider features (if available)

As noted, GCP Cloud NAT can be configured for endpoint-independent mapping by using static port allocation for your VM. If you’re on GCP, you could enable that to improve the odds of direct Tailscale connectivity. The trade-off is you must allocate a block of ports to the VM, which requires predicting your connection needs (to avoid running out of ports)​. Azure currently doesn’t offer a user-facing setting to make their NAT less symmetric, but Azure does have the concept of instance-level public IP addresses and load balancers.

In Azure or AWS, another trick is to use a UDP load balancer (like AWS’ Network Load Balancer or Azure’s Standard Load Balancer ) to forward a UDP port to your instance. For example, you could set up a Network Load Balancer that listens on UDP port 41641 and forwards to your VM on port 41641. This effectively gives your instance a stable UDP port on a public IP without exposing everything. Tailscale nodes on the internet might then be able to reach it directly.

This approach is a bit complex and isn’t officially supported by Tailscale, since the node itself wouldn’t know it’s reachable at that forwarded address unless you manually advertise it. But it’s an option for tinkerers who can’t use a full public IP, but want an incoming hole for Tailscale.

Run a subnet router or exit node

If making each instance reachable through cloud NATs is too much work, one compromise is to have one well-connected node in the cloud act as a subnet router or exit node , and let other instances reach the rest of your tailnet through it. For instance, you could run Tailscale on a small VM that has a public IP, and use Tailscale’s subnet routing feature to allow other private instances to send traffic through that VM when communicating with end-user devices like laptops and phones.

This won’t provide a true peer-to-peer connection from every instance; traffic between a private instance and your laptop would go through the subnet router node, for instance. But at least that relay is under your control, and likely on a fast connection.

This kind of setup is somewhat analogous to running your own DERP, but at the IP level, within your network. Tailscale’s exit node feature can similarly funnel traffic. However, these approaches introduce a single-point bottleneck, and some management overhead, so they’re usually a last resort, if direct P2P absolutely can’t be attained.

An AWS instance, on a blue globe with an abstract stipple representation of North America, connecting to "Amelie," "AWS EKS," and "Google Cloud" through lines running around the globe.

The private kind of public

The simplest guidance today is: use public IP addresses for cloud nodes when you can. Tailscale is secure enough to expose to the internet (since WireGuard keys are required to talk to it), and doing so sidesteps a lot of NAT complexity. Where that’s not feasible, try to make the NAT as friendly as possible—either via configuration (GCP’s EIM, Azure’s forthcoming features) or by bypassing it with your own solution.

We have already seen cloud networks slowly acknowledge these needs: GCP’s addition of endpoint-independent mode is one example, and AWS might in the future offer some mode for “preserving source ports” if enough customers ask. Currently, AWS seems more focused on scaling connections than optimizing peer-to-peer connectivity.

This is the second post in our series on improving connections across NATs, firewalls, and clouds. Next up: What Tailscale, and the web as a whole, might do to make even more connections direct, reliable, and secure.

DOE gives Microsoft partner $1B loan to restart Three Mile Island reactor

Hacker News
techcrunch.com
2025-11-19 02:02:10
Comments...
Original Article

The Trump administration announced Tuesday it would provide Constellation Energy with a $1 billion loan to restart a nuclear reactor at Three Mile Island.

The energy company said last year it would reopen the reactor , which had been shuttered since 2019, after Microsoft committed to purchasing all the electricity from the 835 megawatt power plant for two decades. Constellation estimated the project would cost $1.6 billion, and it expects to complete the refurbishment in 2028.

Terms of Microsoft’s deal with Constellation weren’t disclosed. Analysts at Jefferies have estimated the tech company might be paying about $110 to $115 per megawatt-hour over 20 years of the deal.

That’s cheaper than a brand-new nuclear power plant would cost, but it’s a hefty premium over wind, solar, and geothermal, according to a comparison of energy costs from Lazard. Even wind and solar projects outfitted with utility-scale batteries to enable 24/7 power are cheaper.

Nonetheless, tech companies have recently fallen in love with nuclear as power demands for their data centers and AI efforts have skyrocketed. This summer, Microsoft competitor Meta signed its own deal with Constellation, buying the “clean energy attributes” of a 1.1 gigawatt nuclear power plant in Illinois.

The reactor at Three Mile Island that’s being restarted isn’t the infamous Unit 2, which melted down in 1979. Rather, it’s Unit 1, which was commissioned in 1974 and taken offline in 2019 as cheap natural gas eroded its profitability.

The debt facility is being made through the Department of Energy’s Loan Programs Office (LPO), which was formed under the Energy Policy Act of 2005 to foster the growth of clean energy technologies.


The LPO is most famous for its loan to Solyndra , a U.S. solar startup that went belly-up during the Great Recession. Overall, though, experts consider the LPO a success, with a default rate of 3.3% after recoveries. Tesla, for instance, received a $465 million loan under the program in 2010 and paid it back by 2013.

Last month, the LPO finalized a $1.6 billion loan to American Electric Power, using federal dollars to support the upgrade of around 5,000 miles of transmission lines.

The Inflation Reduction Act, which passed during the Biden administration, created another pot of money under the LPO known as the Energy Infrastructure Reinvestment program. That program was created to restore existing power plants to operation provided they avoid or reduce pollutants or greenhouse gas emissions. The Trump administration kept it largely intact, rebranding it the Energy Dominance Financing Program.

In its press release , the Department of Energy, perhaps erroneously, says the EDF Program was created under the Working Families Tax Cut Act . It was instead authorized under the One Big Beautiful Bill Act .

Tim De Chant is a senior climate reporter at TechCrunch. He has written for a wide range of publications, including Wired magazine, the Chicago Tribune, Ars Technica, The Wire China, and NOVA Next, where he was founding editor.

De Chant is also a lecturer in MIT’s Graduate Program in Science Writing, and he was awarded a Knight Science Journalism Fellowship at MIT in 2018, during which time he studied climate technologies and explored new business models for journalism. He received his PhD in environmental science, policy, and management from the University of California, Berkeley, and his BA degree in environmental studies, English, and biology from St. Olaf College.

You can contact or verify outreach from Tim by emailing tim.dechant@techcrunch.com .


I just want working RCS messaging

Hacker News
wt.gd
2025-11-19 01:41:33
Comments...
Original Article

I just want working RCS messaging

I’m now over a month in with non-working RCS on my iPhone 15 Pro. Apple blames the carriers; the carriers tell me it’s not them (mostly T-Mobile, since I have good contacts there). They tell me they can’t really do anything about iPhones not working on RCS, and to go back to Apple. This is what it looks like:

Waiting for Activation

In short, it’s probably Apple or Google and there’s zero accountability from Apple. I have AppleCare+ and really hoped they’d actually try to troubleshoot and fix this rather than waste my time working around it (in a stupidly expensive way for me and Apple).

My background #

I’m OS agnostic as much as possible, I daily both Android and iOS devices and previously used BlackBerry 10 and Harmattan (Nokia N9’s OS). If Windows Phone was still around I’d probably still be running that as well. If it’s possible to gather information on how all this works under the hood, I can and do. The OnePlus Android devices I’m running are my own LineageOS builds.

Previous history fixing MMS failures for Carriers/Vendors #

I’m also happy to blame carriers and vendors: I previously brought and helped resolve an issue with Verizon Wireless on LineageOS phones due to how MMS messaging works. Here’s my initial submission, their developer LuK found a better way to go about it, but it at least started the ball rolling: https://review.lineageos.org/c/LineageOS/android_device_oneplus_sm8250-common/+/333379

In short: When you received a picture message on Verizon in the past their network would notify your device that a new message arrived. When the device went to grab and download the image, it sends something similar to browser User Agent, called a UAProf. This is a link to a file that describes what the phone can handle, so a smartphone gets a high resolution image and a featurephone gets a lower resolution one. Verizon’s management sucks and decommissioned the domain that hosts all the UAProfs for their devices. Of note, Verizon is uniquely affected by this issue, T-Mobile doesn’t care what UAProf a device advertises, it’s not required on their network. I haven’t done enough testing with AT&T to answer whether it’s an issue for them.

MMS Failure Demonstrations #

This is a former link to a Verizon UAProf for a Samsung device: http://uaprof.vtext.com/sam/i515/i515.xml

Notice it doesn’t load? Apple/Blackberry and basically any non-Android manufacturers didn’t trust carriers to host these files. Some manager at Verizon decided to kill the vtext service and also fucked over any MMS users on their network not using an iPhone.

Here’s Apple’s: https://www.apple.com/mms/uaprof.rdf .
And here’s Blackberry’s: https://www.blackberry.com/go/mobile/profiles/uaprof/9700/5.0.0.rdf

I’m getting off-topic though, I just wanted to post some context that this is not my first rodeo with fixing these kinds of issues. Carriers are incompetent with this sort of interoperability and they gave up on running their own RCS servers to let Google do it through something called Google Jibe, I’ll talk about that soon.

Google breaking RCS on LineageOS #

Starting around the end of 2023, Google started to maliciously break RCS for custom Android OS’s. I say maliciously because it was a silent failure, RCS reported as working, but messages wouldn’t go through, and incoming messages would fail to receive. Google could have remained silent about it and rumors probably would have swirled: Perhaps it was a technical issue or the custom ROM developers’ faults?

No, Google intentionally broke it.

They straight up admitted to blocking it: https://www.androidauthority.com/google-silently-blocking-rcs-rooted-android-phones-custom-roms-3421652/ and it wasn’t until months later that they even showed a notification that it was disabled on affected devices. I really hope some lawyer or regulator reading this will get to extract their pound of blood because Google loves to boast about doing 911 over RCS: https://blog.google/products/messages/google-messages-rcs-911-emergency/

Eventually, for my own devices, I would spoof the fingerprint of Google Pixel devices to be able to use RCS. It has mostly continued to work since then, but it begs the question: If I could reliably work around the blocking, then what excuse do you have about it being to prevent spam? Those spammers will just use the same methods I’ve used, which are hardly secret. It just aims to hurt users that want some control of their device.

Apple launches RCS #

At some point Apple was dragged kicking and screaming into RCS interoperability. I actually have some sympathy here because MMS was really a terrible protocol that nobody should have adopted and Apple was dragged into supporting that years after the original iPhone launch in iOS 3. Regardless, with iOS 18, Apple brought in baseline RCS (version 2.4) support. It is hoped that they will update it sometime in the iOS 26 series to include E2E encryption.

My iPhone Background, Start of RCS Issues #

RCS always worked on my phone in iOS 18 until the past month when I upgraded to iOS 26. I should note that unlike Android, I do not modify iOS device in any way, basically I expect it should ‘just work’. The only unusual thing I run is Mullvad’s DNS to act as an adblocker, but so does my family and their iDevices don’t have RCS issues.

I am a dual-sim user on T-Mobile and US Mobile (usually on the AT&T network). With iOS 26 both lines have been stuck on “Waiting for activation…”. If I transfer the lines off to any other iPhone, the lines activate in seconds. I additionally took a Verizon Wireless line from my Mom’s 14 Pro Max and it also displayed the same issue. My girlfriend has a 14 Pro Max and a SE3, both can activate my RCS lines when I transfer them over.

Troubleshooting Steps I Did #

I’ve done an absolutely exhaustive level of testing to see if these lines would activate on my phone, there’s probably more than this but this is what I could think of:

  1. Rebooted/Toggled Airplane Mode/Toggled RCS
  2. Resetting Network Settings
  3. Removed all my VPN profiles and apps. (Mullvad/Orbot/Mullvad’s DNS profile/my server’s wireguard profile)
  4. Deactivated one of my lines and tried reactivating RCS.
  5. Disabling 5G and trying to activate RCS.
  6. Reissuing both eSIM’s from the carriers.
  7. Toggling iMessage.
  8. Resetting All settings
  9. Resetting everything on device.
    • Restoring from iTunes backup
    • Restoring from iCloud backup (literally activated a trial to be able to do this)
    • Tested resetting with and without eSIM.
  10. Recovering device (recovery mode, setting up as new device)
    • Both with and without eSIM’s on device.
  11. Disabling RCS and waiting days before attempting to reactivate.
  12. Updating my e911 addresses, disabling/re-enabling wifi calling. Testing on Wifi.
  13. Reissuing just T-Mobile eSIM but to the other IMEI on the phone that it’s normally not on.
  14. Deleting the numbers out numerous times in Carrier settings (I have no idea what this does but it does make the signal reconnect).
  15. Testing sending messages from devices that work with RCS to this device in hopes it upgrades.
  16. Testing the iOS beta releases.
  17. I brought up the Gentoo Linux packages for libimobiledevice so I could run idevicesyslog and dump hundreds of megabytes of live logs in hopes of being able to see what the phone is failing on: (the packages) https://github.com/joecool1029/joecool-overlay/tree/master/app-pda
    • This is a small T-Mobile related excerpt of what looks like the problem could be. Specifically, UserInteractionRequired.xml. I don’t know what interaction is needed and why Apple’s software isn’t presenting more information, but this is the best I could do from digging through a ton of redacted logs:

      Nov 9 15:54:14.294398 CommCenter[101] <Debug>: #D supportsHOVirtualInterfaces: ret = false
      Nov 9 15:54:14.294406 CommCenter[101] <Notice>: #I --> switch: true, bundle_support: false, entitlement_support: true, enabled_by_default: true, disabled_by_profile: false, is_store_demo_device: false
      Nov 9 15:54:14.294415 CommCenter[101] <Debug>: #D supportsHOVirtualInterfaces: ret = false
      Nov 9 15:54:14.294424 CommCenter[101] <Notice>: #I --> encryption_supported: false, push_supported: false, push_enabled: false, private_relay_supported: false, msisdn_source: (empty)
      Nov 9 15:54:14.294432 CommCenter[101] <Debug>: #D supportsHOVirtualInterfaces: ret = false
      Nov 9 15:54:14.294440 CommCenter[101] <Notice>: #I --> Changed: (nothing)
      Nov 9 15:54:14.294448 CommCenter[101] <Debug>: #D supportsHOVirtualInterfaces: ret = false
      Nov 9 15:54:14.294455 CommCenter[101] <Notice>: #I Ims registration interface: kUnknown --> kCellular
      Nov 9 15:54:14.294463 CommCenter[101] <Debug>: #D supportsHOVirtualInterfaces: ret = false
      Nov 9 15:54:14.294471 CommCenter[101] <Notice>: #I Lazuli model not allowed: [provisioning style: kUsingToken, sms online: false, msisdn OK: true]
      Nov 9 15:54:14.294479 CommCenter[101] <Debug>: #D supportsHOVirtualInterfaces: ret = false
      Nov 9 15:54:14.294487 CommCenter[101] <Notice>: #I Provisioning not possible
      Nov 9 15:54:14.294494 CommCenter[101] <Debug>: #D supportsHOVirtualInterfaces: ret = false
      Nov 9 15:54:14.294505 CommCenter[101] <Notice>: #I Infinite validity of UserInteractionRequired.xml xml
      Nov 9 15:54:14.294514 CommCenter[101] <Notice>: #I [config.rcs.mnc260.mcc310.jibecloud.net] Declaring IMS not ready. Unexpired : UserInteractionRequired.xml
      Nov 9 15:54:14.294522 CommCenter[101] <Debug>: #D supportsHOVirtualInterfaces: ret = false
      Nov 9 15:54:14.294529 CommCenter[101] <Notice>: #I Nudge not required: Allowed
      Nov 9 15:54:14.294537 CommCenter[101] <Debug>: #D supportsHOVirtualInterfaces: ret = false
      Nov 9 15:54:14.294546 CommCenter[101] <Notice>: #I Evaluate recheckEntitlementForRCS. Ent:Allowed, Switch toggled:false, CB recheck:false
      Nov 9 15:54:14.294554 CommCenter[101] <Debug>: #D supportsHOVirtualInterfaces: ret = false
      Nov 9 15:54:14.294561 CommCenter[101] <Notice>: #I Entitlement result: [RCS support: kSupported, user eligibile: kEligible, token-support: true]
      Nov 9 15:54:14.294569 CommCenter[101] <Debug>: #D supportsHOVirtualInterfaces: ret = false
      Nov 9 15:54:14.294577 CommCenter[101] <Notice>: #I Evaluated provisioning style: kUsingToken
      Nov 9 15:54:14.294584 CommCenter[101] <Debug>: #D supportsHOVirtualInterfaces: ret = false
      Nov 9 15:54:14.294592 CommCenter[101] <Notice>: #I Retrieving feature switch state
      Nov 9 15:54:14.294600 CommCenter[101] <Debug>: #D supportsHOVirtualInterfaces: ret = false
      Nov 9 15:54:14.294608 CommCenter(CoreServices)[101] <Debug>: Starting database access (depth 0, options: 1)
      Nov 9 15:54:14.294616 CommCenter[101] <Debug>: #D supportsHOVirtualInterfaces: ret = false
      Nov 9 15:54:14.294624 CommCenter(CoreServices)[101] <Debug>: BindingEvaluator::CreateWithBundleInfo(ID=<private>, name=<private>, CC=????, vers=(null))
      Nov 9 15:54:14.294633 CommCenter[101] <Debug>: #D supportsHOVirtualInterfaces: ret = false
      Nov 9 15:54:14.294641 CommCenter(CoreServices)[101] <Debug>: Truncating a list of bindings to max 1 known-good ones.
      Nov 9 15:54:14.294648 CommCenter[101] <Debug>: #D supportsHOVirtualInterfaces: ret = false
      Nov 9 15:54:14.294656 CommCenter(CoreServices)[101] <Debug>: Truncating a list of bindings to max 1 known-good ones.

So this last entry probably tells us where to look. The carrier (T-Mobile here) is provisioned for RCS, it’s receiving this interaction required file with infinite validity. So long as that’s in place, it fails to activate. (This is a guess, but it’s certainly more information than the KB articles give on Apple’s sites).

Apple does not provide their employees with correct information on troubleshooting this issue. They do not empower them to properly troubleshoot the issue. #

The standard instruction given to them is: “Do not take accountability, blame the carrier.”

So then I come in and say I have failures with all 3 major carriers and categorically refuse to accept that explanation, when I know my lines work just fine on other iDevices.

The Apple Store initially blamed software. This would be reasonable, except we’ve reloaded the state of my phone 3 times now (once from iTunes, and twice now from iCloud; tomorrow will be the 4th time). I gave them permission to wipe any setting and recover the phone, but I go a step further and request they transfer my T-Mobile eSIM to another store device, preferably in the 15 Pro line. They cannot do this because of user privacy reasons. This is a dealbreaker for troubleshooting: I am not made of money and I do not have any additional 15 Pro devices to test with. It’s already crazy enough that I have multiple carriers at the ready to test, two 14 Pro Maxes, and an SE3.

Google Jibe #

I think this is where we need information. As I said before, the carriers in the US gave up running their own RCS infrastructure and Apple’s employees aren’t really trained about this situation. With the exception of my own knowledge and the logs I pulled from the phone, Jibe was not mentioned once in the 3 phone calls and the multiple hours onsite in Apple Store today.

I have no business relationship with Google Jibe, and there’s no way for me to interact with or contact them. Their documentation is probably here but I can’t read it, since I’m not a carrier partner: https://docs.jibemobile.com/ Apple knows there’s a ‘carrier’ issue, but in reality, RCS is run through Google Jibe in the US and this was never once disclosed to me. I never brought it up until this blog post, I cannot go into a store and say “I have been using opensource tools to analyze the logs from this phone and think it’s a failure with Jibe” . Do you get how crazy this sounds?

What Apple’s Going To Do Tomorrow #

Since they hit a wall and I refuse to continue to entertain the “go bug T-Mobile/US Mobile” direction, Apple is swapping the board in my phone. Of course they didn’t have the parts in the store to do it, so I have to wait to drive back tomorrow for them to do it. This will have new IMEI numbers and given the experience I’ve had with these lines activating on 3 other iDevices, it should probably work. The only way it wouldn’t is if this was a generational issue, but they have not given me a way to test this. They adamantly tell me: “We are doing you the favor as a courtesy, we don’t believe this is our problem.” I know they are trained to say this but it’s terrible customer service. I shelled out for Applecare+, if it might be the phone just swap it and analyze it back at Apple HQ, I’ve done enough testing now to know it’s something with just this specific device. I referred people to use iPhones because in general they do not often have these issues and the customer support was good. The board swap solution they are offering only wastes my time/fuel and punts the problem down the road. Since we never actually looked at the logs I might hit it again, other users might be affected.

I’d rather Apple actually fix the problem #

I use opensource software not because it’s inherently better, but rather because I can at least triage, understand, and fix problems. Give me a radar, Apple. I’m a rare dual-SIM user in the US with a Google Jibe RCS failure. Where did it fail? Dig into my logs and tell me: Is it because I hop between primary data carriers (the whole reason I have dual carriers is better service coverage)? I don’t spend a lot of time on WiFi; I run my house on mobile carriers. The only thing I know is that I didn’t change my configuration from iOS 18 to iOS 26, but things stopped working, and there’s no way for me to downgrade to 18 because you stopped signing it!


Strace-macOS: A clone of the strace command for macOS

Hacker News
github.com
2025-11-19 01:18:02
Comments...
Original Article

strace-macos

A system call tracer for macOS using the LLDB debugger API.

Status : Beta - Core functionality works, but some features are still in development.

Features

  • Works with SIP enabled - Unlike dtruss , doesn't require disabling System Integrity Protection
  • Pure Python implementation - No kernel extensions or compiled components
  • Multiple output formats - JSON Lines and strace-compatible text output
  • Syscall filtering - Filter by syscall name or category ( -e trace=file , -e trace=network )
  • Symbolic decoding - Automatically decodes flags, error codes, and struct fields
  • Color output - Syntax highlighting when output is a TTY
  • Summary statistics - Time/call/error counts with -c

Installation

With Nix Flakes

# Run directly
nix run github:Mic92/strace-macos -- ls

# Install to profile
nix profile install github:Mic92/strace-macos

Manual Installation

strace-macos requires macOS system Python (has LLDB bindings):

# Install directly from GitHub
/usr/bin/python3 -m pip install --user git+https://github.com/Mic92/strace-macos

# Then run (if ~/Library/Python/3.x/bin is in PATH)
strace /usr/local/bin/git status  # or any homebrew-installed binary

# Or run directly from repository without installing
git clone https://github.com/Mic92/strace-macos
cd strace-macos
/usr/bin/python3 -m strace_macos /usr/local/bin/git status

Usage

Trace a command

# Basic usage (use non-system binaries like homebrew or nix-installed)
strace /usr/local/bin/git status

# Output to file
strace -o trace.txt /usr/local/bin/git status

# JSON output
strace --json /usr/local/bin/git status > trace.jsonl

# Filter syscalls by name
strace -e trace=open,close /usr/local/bin/git status

# Filter by category*
strace -e trace=file /usr/local/bin/git status    # All file operations
strace -e trace=network /usr/local/bin/curl https://example.com   # Network syscalls only
strace -e trace=process /usr/local/bin/git status # Process lifecycle syscalls

* See Syscall Filtering for all supported categories.

Attach to running process

Summary statistics

strace -c /usr/local/bin/git status
# % time     seconds  usecs/call     calls    errors syscall
# ------ ----------- ----------- --------- --------- ----------------
#  45.23    0.001234          12       103           read
#  32.10    0.000876           8       110           write
#  ...

Syscall Filtering

strace-macos supports filtering syscalls by name or category using the -e trace= option.

Filter by Syscall Name

Specify one or more syscall names separated by commas:

strace -e trace=open,close,read,write /usr/local/bin/git status

Filter by Category

Use predefined categories to trace groups of related syscalls:

Category Description Example Syscalls
file File operations open, close, read, write, stat, unlink
network Network operations socket, connect, send, recv, bind
process Process lifecycle fork, exec, wait, exit, kill
memory Memory management mmap, munmap, brk, mprotect
signal Signal handling signal, sigaction, sigprocmask, kill
ipc Inter-process communication pipe, shm_open, msgget, semop
thread Thread operations pthread_create, bsdthread_register
time Time and timers gettimeofday, setitimer, utimes
sysinfo System information sysctl, getpid, getuid, uname
security Security/MAC operations __mac_*, csops, csrctl
debug Debugging and tracing ptrace, kdebug_trace, panic_with_data
misc Miscellaneous syscalls ioctl, fcntl, kqueue, connectx

Example:

# Trace only file operations
strace -e trace=file /usr/local/bin/git status

# Trace only network syscalls
strace -e trace=network /usr/local/bin/curl https://example.com

# Trace process management syscalls
strace -e trace=process /usr/local/bin/git status

Comparison with Linux strace

Feature Linux strace strace-macos
Filter by syscall name -e trace=open,close -e trace=open,close
Filter by category -e trace=file -e trace=file
Negation ( ! ) -e trace=!open ❌ Not yet
Regex filtering -e trace=/^open/ ❌ Not yet
Path filtering -P /etc/passwd ❌ Not yet
FD filtering -e trace-fd=3 ❌ Not yet
%desc category ✅ FD-related syscalls ❌ Not yet
Percent prefix %file or file ⚠️ Only file

Requirements

  • macOS 12+ (Monterey or later)
  • Apple Silicon (ARM64) - primary platform
  • Intel (x86_64) - work in progress
  • Xcode Command Line Tools (for LLDB)
  • System Python ( /usr/bin/python3 )

Important : Must use macOS system Python - LLDB bindings don't work with Homebrew/pyenv/Nix Python.

Contributing

Contributions are welcome! See CONTRIBUTING.md for:

  • Development environment setup
  • Code style guidelines
  • Testing instructions
  • How to add new syscalls
  • Pull request process

Current Status : 3/13 tests passing (spawn functionality working)

Architecture

strace-macos (Python CLI)
    ↓
LLDB Python API
    ↓
debugserver (macOS debugging APIs)
    ↓
Target Process

The tracer uses LLDB's Python bindings to:

  1. Set breakpoints at syscall entry/exit points
  2. Read CPU registers to extract syscall arguments
  3. Decode arguments symbolically (flags, errno, structs)
  4. Format output in strace-compatible or JSON format

Implementation Status

Working :

  • Spawn and trace new processes ✅
  • Attach to running processes ✅
  • Basic syscall capture (entry/exit) ✅
  • Argument decoding (integers, strings, pointers, buffers, iovecs) ✅
  • Symbolic flag decoding (O_RDONLY, etc.) ✅
  • Error code decoding (ENOENT, etc.) ✅
  • Struct decoding (stat, sockaddr, msghdr, etc.) ✅
  • Syscall filtering by name and category ✅
  • Summary statistics ( -c ) ✅
  • JSON and text output formats ✅
  • Color output with syntax highlighting ✅

Planned :

  • Multi-threaded process support
  • Follow forks ( -f )
  • Negation filtering ( -e trace=!open )
  • Regex filtering ( -e trace=/^open/ )
  • Path-based filtering ( -P /path )
  • FD-based filtering ( -e trace-fd=3 )
  • String truncation control ( -s )
  • Relative/absolute timestamps ( -t , -tt , -ttt )

Why not dtruss?

macOS ships with dtruss , a DTrace-based syscall tracer. However:

  • Requires disabling System Integrity Protection (SIP)
  • Doesn't work on modern macOS versions without workarounds
  • Limited filtering capabilities
  • No symbolic decoding of arguments

strace-macos works with SIP enabled and provides richer output.

Comparison with Linux strace

strace-macos aims for compatibility with Linux strace where possible:

Feature Linux strace strace-macos
Basic tracing
Attach to PID
Syscall filtering*
Summary stats
Follow forks
Symbolic decoding
JSON output
Color output

* See Syscall Filtering for detailed feature comparison.

License

MIT License - see LICENSE file for details.

Author

Jörg Thalheim joerg@thalheim.io

Need commercial support or customization?

For commercial support, please contact Mic92 at joerg@thalheim.io or reach out to Numtide .

See Also

Labor Movement Cheers As House Members Force Vote To Restore Federal Workers’ Union Rights

Portside
portside.org
2025-11-19 00:57:49
Labor Movement Cheers As House Members Force Vote To Restore Federal Workers’ Union Rights Greg Tue, 11/18/2025 - 19:57 ...
Original Article

In March, government workers protest President Donald Trump's attack on the collective bargaining rights of federal employees. | American Federation of Government Employees

Two Republicans in the US House of Representatives on Monday added their names to a discharge petition that will now force a vote on legislation to restore the collective bargaining rights of hundreds of thousands of federal workers targeted by GOP President Donald Trump .

US Reps. Jared Golden (D-Maine) and Brian Fitzpatrick (R-Pa.) responded to Trump’s legally contentious executive order by introducing the Protect America’s Workforce Act in April. They began collecting petition signatures in June. At least 218 members had to sign it to override House Speaker Mike Johnson (R-La.) and force a vote on the bill.

Two New York Republicans, Congressmen Nick LaLota and Mike Lawler, signed the petition on Monday. It was previously signed by the sponsors, House Democrats , and GOP Reps. Rob Bresnahan (Pa.) and Don Bacon (Neb.). Their move came on the heels of an end to the longest government shutdown in US history, which left some federal workers furloughed and others working without pay.

“Every American deserves the right to have a voice in the workplace, including those who serve their country every single day. Supporting workers and ensuring good government are not opposing ideas,” Lawler said in a statement. “They go hand in hand. Restoring collective bargaining rights strengthens our federal workforce and helps deliver more effective, accountable service to the American people.”

“Speaker Johnson has run out of excuses to delay a vote on this legislation to restore federal workers’ rights.”

Golden, a former Blue Dog Coalition co-chair who recently announced his plans to retire from Congress after this term, thanked the newest signatories for joining the fight for his bill.

“America never voted to eliminate workers’ union rights, and the strong bipartisan support for my bill shows that Congress will not stand idly by while President Trump nullifies federal workers’ collective bargaining agreements and rolls back generations of labor law,” Golden said. “I’m grateful to Reps. LaLota and Lawler for bringing this discharge petition over the finish line, and I’m calling on Speaker Mike Johnson to schedule a clean, up-or-down vote on this bill.”

Liz Shuler, president of the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO), the country’s largest federation of unions , similarly welcomed the latest signatures and set her sights on the House speaker.

“The labor movement fought back against the largest act of union-busting in American history by doing what we do best: organizing,” Shuler said in a Monday statement. “Working people built a bipartisan coalition to restore union rights to federal workers in the face of unprecedented attacks on our freedoms. We commend every Democrat and Republican who signed the discharge petition to bring the Protect America’s Workforce Act to a vote, but the fight isn’t over.”

“Speaker Johnson has run out of excuses to delay a vote on this legislation to restore federal workers’ rights,” she continued. “It’s time to bring the Protect America’s Workforce Act to a vote and restore federal workers’ right to collectively bargain and have a voice on the job.”

Everett Kelley, national president of the American Federation of Government Employees (AFGE)—which is the largest federal workers union, representing 820,000 people in the federal and District of Columbia governments—also applauded the development on Monday.

“An independent, apolitical civil service is one of the bedrocks of American democracy ,” Kelley said in a statement. “Today, lawmakers stood up together to defend that principle and to affirm that federal workers must retain their right to collective bargaining. This is what leadership looks like.”

“Federal workers do their jobs every day without regard to politics. Today’s action honors that commitment,” Kelley asserted.

“AFGE will continue fighting until these essential rights are fully restored, including by fighting to retain Section 1110 of the must-pass National Defense Authorization Act,” he vowed, referring to an amendment to the NDAA that restores bargaining rights to hundreds of thousands of civilians working in the US Department of Defense.

While discharge petitions are rarely successful, this one secured the necessary 218 signatures following a similar victory last week, when the newest member of Congress, Rep. Adelita Grijalva (D-Ariz.), signed her name to an effort to force a vote on releasing files related to deceased sex offender Jeffrey Epstein.

Jessica Corbett is a senior editor and staff writer for Common Dreams.

I made a down detector for down detector

Hacker News
downdetectorsdowndetector.com
2025-11-19 00:05:28
Comments...
Original Article

A tiny independent status checker.


Scientists Discover the Origin of Kissing — And It’s Not Human

403 Media
www.404media.co
2025-11-19 00:01:49
Kissing is ubiquitous among many animals, especially primates, suggesting deep evolutionary roots of the behavior....
Original Article


Kissing is one of humanity’s most cherished rituals—just think of the sheer variety of smooches, from the “wedding kiss” to the “kiss of death.” Now, scientists have discovered that the origins of this behavior, which is widespread among many primates, likely dates back at least 21 million years, according to a study published on Tuesday in the journal Evolution and Human Behavior .

In other words, our early primate relatives were sitting in a tree, K-I-S-S-I-N-G, in the early Miocene period. Moreover, the deep evolutionary roots of kissing suggest that Neanderthals likely smooched each other, and probably our human ancestors as well. The new study is the first attempt to reconstruct the evolutionary timeline of kissing by analyzing a wealth of observations about this behavior in modern primates and other animals.

“It is kind of baffling to me that people haven't looked at this from an evolutionary perspective before,” said Matilda Brindle, an evolutionary biologist at the University of Oxford who led the study, in a call with 404 Media. “There have been some people who have put ideas out there, but no one's done it in a systematic way.”

“Kissing doesn't occur in all human cultures, but in those that it does, it's really important,” she added. “That's why we thought it was really exciting to study.”

A collage of mouth-to-mouth contact across species. Image: Brindle, Matilda et al.

The ritual of the “first kiss” is a common romantic trope, but tracking down the “first kiss” in an evolutionary sense is no easy feat. For starters, the adaptive benefits of kissing have long eluded researchers. Mouth-to-mouth contact raises the odds of oral disease transfer, and it’s not at all clear what advantages puckering up confers to make it worth the trouble.

“Kissing is kind of risky,” Brindle said. “You're getting very close to another animal's face. There could be diseases. To me, that suggests that it is important. There must be some benefits to this behavior.”

Some common explanations for sex-related kissing include mate evaluation—bad breath or other red flags during a smoochfest might affect the decision to move on to copulation. Kissing may also stimulate sexual receptiveness and perhaps boost the odds of fertilization. In platonic contexts, kissing could serve a social purpose, similar to grooming, of solidifying bonds between parents and offspring, or even to smooth over conflicts between group members.

“We know that chimpanzees, when they've had a bit of a bust up, will often go and kiss each other and make up,” Brindle said. “That might be really useful for navigating social relationships. Primates are obviously an incredibly social group of animals, and so this could be just a social lubricant for them.”

Though most of us have probably never considered the question, Brindle and her colleagues first had to ask: what is a kiss? They made a point to exclude forms of oral contact that don’t fall into the traditional idea of kissing as a prosocial behavior. For example, lots of animals share food directly through mouth-to-mouth contact, such as regurgitation from a parent to offspring. In addition, some animals display antagonistic behavior through mouth-to-mouth contact, such as “kiss-fighting” behavior seen in some fish.

The team ultimately defined kissing as “a non-agonistic interaction involving directed, intraspecific, oral-oral contact with some movement of the lips/mouthparts and no food transfer.” Many animals engage in kissing under these terms—from insects, to birds, to mammals—but the researchers were most interested in primates.

To that end, they gathered observations of kissing across primate species and fed the data into models that analyzed the timeline of the behavior through the evolutionary relationships between species. The basic idea is that if humans, bonobos, and chimpanzees all kiss (which they do) then the common ancestor of these species likely kissed as well.

The results revealed that the evolutionary “first kiss” likely occurred among primates at least 21 million years ago. Since Neanderthals and our own species, Homo sapiens , are known to have interbred—plus they also shared oral microbes—the team speculates that Neanderthals and our own human ancestors might have kissed as well.

While the study provides a foundation for the origins of kissing, Brindle said there is not yet enough empirical data to test out different hypotheses about its benefits—or to explain why it is important in some species and cultures, but not others. To that end, she hopes other scientists will be inspired to report more observations about kissing in wild and captive animal populations.

“I was actually surprised that there were so few data out there,” Brindle said. “I thought that this would be way better documented when I started this study. What I would really love is, for people who see this behavior, to note it down, report it, so that we can actually start collecting more contextual information: Is this a romantic or a platonic kiss? Who were the actors in it? Was it an adult male and an adult female, or a mother and offspring? Were they eating at the time? Was there copulation before or after the kiss?”

“These sorts of questions will enable us to pick apart these potential adaptive hypotheses,” she concluded.


Lucent 7 R/E 5ESS Telephone Switch Rescue

Hacker News
kev009.com
2025-11-18 23:59:14
Comments...
Original Article

I am still recovering from the fairly challenging logistical project of saving a Lucent 5ESS. This is a whale of a project and I am still in a state of disbelief that I have gotten to this point. Thanks to my wife, brother, and a few friends for their help and the University of Arizona which has a very dedicated and professional Information Technology Services staff.

5ESS Rows

It started when I saw some telephone history enthusiasts post about a construction bid at the University of Arizona. It turns out, U of A installed the 5ESS in the late 1980s in a rather forward thinking move that netted a phone system that handled the growth of the University, medium speed data anywhere a phone may be located (ISDN BRI or PRI), and copper and fiber plant that will continue to be used indefinitely.

At peak, it served over 20,000 lines. They've done their own writeup, The End of An Era in Telecommunications , that is worth a read. In particular, the machine had an uptime of approximately 35 years including two significant retrofits to newer technology culminating in the current Lucent-dressed 7 R/E configuration that includes an optical packet-switched core called the Communications Module 3 (CM3) or Global Messaging Server 3 (GMS3).

5ESS diagram

Moving 40 frames of equipment required a ton of planning and muscle. The whole package took up two 26' straight-trucks, which is just 1' short of an entire standard US semi-trailer.

Coming from the computing and data networking world, I found the construction of the switch quite bewildering at first. It is physically made up of standard frames interconnected into rows, not unlike datacenter equipment, but the frames are integrated into an overhead system for cable management. Internally, most wiring stays within a row, quite a few cables route horizontally between frames, and some connections have to transit up and over to other rows.

Line Trunk Peripherals hook up to a Switching Module Controller (SMC) directly or an OXU (Optical Cross Connect Unit) which hooks up to an SMC and reduces the amount of copper cabling going between rows. Alarm cables run directly to an OAU (Office Alarm Unit) or form rings in between rows that eventually end at the OAU. Optical connections go from OXUs to SMCs and then to the CM, copper test circuits home run to a Metallic Test Service Unit shelf. Communications Cables come out the top and route toward the wire frame, usually in large 128 wire cables but occasionally in smaller quantity for direct or cross connect of special services. A pair of Power Distribution Frames distribute -48V throughout the entire footprint, taking into account redundancy at every level.

All of this was neatly cable laced with wax string. Moving a single frame required hundreds of distinct actions that vary from quick, like cutting cable lace, to time consuming removal of copper connections and bolts in all directions.

5ESS move

We were able to complete the removal in a single five day workweek, and I was able to unload it to my receiving area in two days over the weekend where it now safely resides.

The next step will be to acquire some AC and DC power distribution equipment, which will have to wait for my funds to recover.

I should be able to boot the Administrative Module (AM), a 3B21D computer, relatively soon by acquiring a smaller DC rectifier. That alone will be very interesting, as it is the only use I know of for the DMERT or UNIX-RTR operating system, a fault-tolerant micro-kernel realtime UNIX from Bell Labs.

3B21D

The system came with a full set of manuals and schematics which will help greatly in rewiring and reconfiguring the machine. After the AM is up, I need to "de-grow" the disconnected equipment and I will eventually add back in an assortment of line, packet, and service units so that I can demonstrate POTS as well as ISDN voice and data. In particular, I am looking forward to interoperating with other communication and computing equipment I have.

I will have to reduce the size of the system quite a bit for power and space reasons so will have spare parts to sell or trade.

Additional Pictures are available here until I have a longer term project page established.

This is too much machine for one man, and it is part of a broader project I am working on to build a computing and telecommunications museum. If you are interested in working on the system with me, please feel free to reach out.

5ESS receiving


Why MAGA Is Obsessed With Epstein − and Why the Files Are Unlikely To Dent Loyalty to Trump

Portside
portside.org
2025-11-18 23:58:51
Why MAGA Is Obsessed With Epstein − and Why the Files Are Unlikely To Dent Loyalty to Trump Judy Tue, 11/18/2025 - 18:58 ...
Original Article

MAGA hats are placed on a table at an election night party in West Palm Beach, Fla., on Nov. 5, 2024. | Ricky Carioti/The Washington Post via Getty Images

With the latest shift by President Donald Trump on releasing the Epstein files held by the U.S. Department of Justice – he’s now for it after being against it after being for it – the MAGA base may finally get to view the documents it’s long wanted to see. On the afternoon of Nov. 18, 2025, the House voted overwhelmingly to seek release of the files, with only one Republican voting against the measure . The Conversation’s politics editor, Naomi Schalit, talked with scholar Alex Hinton, who has studied MAGA for years , about Make America Great Again Republicans’ sustained interest in the case of accused child sex trafficker Jeffrey Epstein. Hinton explains how MAGA’s interest in the case fits into what he knows about the group of die-hard Trump supporters.

Naomi Schalit: You are an expert on MAGA. How do you learn what you know about MAGA?

Alex Hinton: I’m a cultural anthropologist , and what we do is field work. We go where the people we’re studying live, act, talk. We observe and sort of hang out and see what happens. We listen and then we unpack themes. We try and understand the meaning systems that undergird whatever group we’re studying. And then, of course, there’s interviewing.

U.S. Rep. Thomas Massie, a Kentucky Republican, speaks at a press conference alongside alleged victims of Jeffrey Epstein at the U.S. Capitol on Sept. 3, 2025. Bryan Dozier/Middle East Images via AFP, Getty Images

It appears that MAGA, Trump’s core supporters, are very concerned about various aspects of the Epstein story , including the release of documents that are in the possession of the U.S. government. Are they, in fact, concerned about this?

The answer is yes, but there’s also a sort of “no” implicit, too. We need to back up and think, first of all, what is MAGA.

I think of it as what we call in anthropology a nativist movement, a foregrounding of the people in the land. And this is where you get America First discourse . It’s also xenophobic, meaning that there’s a fear of outsiders, invaders coming in. It’s populist, so it’s something that’s sort of for the people.

Tucker Carlson interviewed Marjorie Taylor Greene , and he said, “I’m going to go over the five pillars of MAGA.” Those were America First, this is absolutely central. Borders was the second. You’ve got to secure the borders. The third was globalist antipathy, or a recognition that globalization has failed. Another one was free speech, and another one he mentioned was no more foreign wars. And I would add into that an emphasis on “we the people” versus elites.

Each of those is interwoven with a key dynamic to MAGA, which is conspiracy theory . And those conspiracy theories are usually anti-elite, going back to we the people.

If you look at Epstein, he’s where many of the conspiracy theories converge : Stop the Steal , The Big Lie , lawfare , deep state , replacement theory . Epstein kind of hits all of these, that there’s this elite cabal that’s orchestrating things that ultimately are against the interests of we the people, with a sort of antisemitic strain to this. And in particular, if we go back to Pizzagate in 2016 , this conspiracy theory that there were these Democratic elitists who were, you know, demonic forces who were sex trafficking, and lo and behold, here’s Epstein doing precisely that.

There’s kind of a bucket of these things, and Epstein is more in it than not in it?

He’s all over it. He’s been there, you know, from the beginning, because he’s elite and they believe he’s doing sex trafficking. And then there’s a suspicion of the deep state, of the government, and this means cover-ups. What was MAGA promised? Trump said, we’re going to give you the goods , right? Kash Patel, Pam Bondi, everyone said we’re going to tell you this stuff. And it sure smacks of a cover-up, if you just look at it.

But the bottom line is there’s a realization among many people in MAGA that you’ve got to stay with Trump. It’s too much to say there is no MAGA without Trump. There’s certainly no Trumpism without Trump, but MAGA without Trump would be like the tea party. It’ll just sort of fade away without Trump.

People in MAGA are supporting Trump more than more mainstream Republicans on this. So I don’t think there’s going to be a break over this, but it certainly adds strain. And you can see in the current moment that Trump is under some strain.

President Donald Trump and U.S. Rep. Marjorie Taylor Greene, a longtime supporter, have split over the Epstein files release. Elijah Nouvelage/AFP Getty Images

The break that we are seeing is Trump breaking with one of his leading MAGA supporters, Marjorie Taylor Greene, not the MAGA supporter breaking with Trump.

With Greene, sometimes it’s like a yo-yo in a relationship with Trump. You fall apart, you have tension, and then you sort of get back. Elon Musk was a little bit like that . You have this breakup, and now she’s sort of backtracking like Elon Musk did. I don’t think what is happening is indicative of a larger fracturing that’s going to take place with MAGA.

It seems that Trump did his about-face on releasing the documents so that MAGA doesn’t have to break with him.

It’s absolutely true. He’s incredible at taking any story and turning it in his direction. He’s sort of like a chess player, unless he blurts something out. He’s a couple of moves ahead of wherever, whatever’s running, and so in a way we’re always behind, and he knows where we are. It’s incredible that he’s able to do this.

There’s one other thing about MAGA. I think of it as “don’t cross the boss.” It’s this sort of overzealous love of Trump that has to be expressed, and literally no one ever crosses the boss in these contexts. You toe the line, and if you go against the line, you know what happened to Marjorie Taylor Greene, there’s the threat Trump is going to disown you. You’re going to get primaried .

Trump has probably made a brilliant strategic move, which is suddenly to say, “I’m all for releasing it. It’s actually the Democrats who are these evil elites, and now we’re going to investigate Bill Clinton and all these other Democrats.” He takes over the narrative, he knows how to do it, and it’s intentional. Whoever says Trump is not charismatic, he doesn’t make sense – Trump is highly charismatic. He can move a crowd. He knows what he’s doing. Never underestimate him.

Does MAGA care about girls who were sexually abused?

There is concern, you know, especially among the devout Christians in MAGA, for whom sex trafficking is a huge issue.

I think if you look at sort of notions of Christian morality, it also goes to notions of sort of innocence, being afflicted by demonic forces. And it’s an attack on we the people by those elites; it’s a violation of rights. I mean, who isn’t horrified by the idea of sex trafficking? But again, especially in the Christian circles, this is a huge issue.

===

Alex Hinton

Distinguished Professor of Anthropology; Director, Center for the Study of Genocide and Human Rights, Rutgers University - Newark

Food Assistance for All

Portside
portside.org
2025-11-18 23:40:45
Food Assistance for All Judy Tue, 11/18/2025 - 18:40 ...
Original Article

Economic insecurity is rampant and devastating. In Canada, a minimum-wage worker working full-time cannot afford a one-bedroom apartment in fifty-three of the country’s sixty-two urban regions. In the UK, 74 percent of parents find it difficult to meet childcare costs , and 10 percent are food insecure . In the United States, things are even worse: 770,000 people experience homelessness every night, and 40 percent of households are only three paychecks away from falling below the poverty line.

Not only is economic insecurity devastating to people’s lives — it is also a major force spurring the expansion of the far right , which has grown in leaps and bounds from a vote share of 3 percent across Europe in 2004, to over 25 percent today — higher rates than any time since the 1930s. The trajectory is clear and terrifying.

A universal basic income (UBI) is frequently proposed as a powerful antidote. Unfortunately, it remains quite unpopular with regular citizens on the (very questionable) grounds that it would disincentivize work, or be spent on drugs and alcohol. In the United States, support for a UBI ranges from about 38 percent to 45 percent of the population, and support is only slightly higher in Canada and the UK. Moreover, these survey results probably overestimate the true level of support for a UBI, because asking a person if they “support” something in the abstract doesn’t necessarily mean that, when push comes to shove, they would actually vote for it.

Only one country to date has come close to implementing a full-scale UBI. On June 5, 2016, the Swiss had a referendum on whether to adopt a scheme that would provide a monthly cash payment of around €2,330 per month, to all Swiss adult residents, without any means test or work requirement. Prior to the vote, the most authoritative survey found that 36 percent of Swiss respondents said they theoretically “supported” UBI. Yet when it came down to it, only 23 percent of people actually voted yes,  while 77 percent voted no — a resounding defeat.

No country anywhere in the world has implemented UBI. Only one country has had a national referendum on its introduction (which was roundly rejected), and no major political party anywhere officially endorses it — the largest parties that do are the Green League in Finland, which receives about 7 percent of the vote, Podemos in Spain at about 7 percent, and the Green Party of England and Wales, which receives about 3 percent.

An Alternative to UBI?

Over the long term, progressives can and should try to slowly shift the culture to persuade people that UBI is both ethically desirable and practically feasible. But we can’t wait to address economic insecurity. What we need is a UBI-like policy that successfully enhances economic security without putting the majority of voters’ noses too far out of joint.

Here is a realistic solution: free groceries for all.

All citizens would be provided with a relatively small amount of money unconditionally, say, $50 per month ($600 per year) for adults and $25 per month ($300 per year) for children. The money would be automatically transferred to a Grocery Electronic Card registered to each adult or parent (as is the practice for food stamps in the United States). These cards could only be used in registered establishments for the purchase of groceries and nothing else.

If enacted in the United States, the total cost of the program would be approximately $177 billion, or 0.6 percent of GDP. (In Canada, CAD$700 of free groceries would cost CAD$24 billion, or 1 percent of GDP, and in the UK it would cost £24 billion, or 0.9 percent of GDP, for £400 of groceries).
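As a rough sanity check on that headline figure, here is a back-of-envelope calculation. The population split (roughly 258 million adults and 73 million children in the US) and the ~$30 trillion GDP figure are my own approximations for illustration, not numbers from the article.

// Back-of-envelope check of the ~$177 billion estimate, using an assumed
// split of ~258M adults and ~73M children (approximations, not article data).
fn main() {
    let adults = 258_000_000_f64;
    let children = 73_000_000_f64;

    // $600 per adult and $300 per child per year, as proposed above.
    let annual_cost = adults * 600.0 + children * 300.0;

    println!("Total: about ${:.0} billion per year", annual_cost / 1e9); // ~$177 billion
    println!("Share of a ~$30 trillion GDP: about {:.1}%", annual_cost / 30e12 * 100.0); // ~0.6%
}

The same arithmetic scales straightforwardly to the Canadian and UK figures quoted above.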

These are real and significant costs, no doubt about it, but they are not extreme. $177 billion is about 15 percent of what the US government currently spends on the military .

A program of this size could be paid for by raising taxes on the total income of the richest 10 percent of Americans by approximately 2.86 percent. Or alternatively, it could be paid for by instituting a wealth tax on the richest 1 percent at a rate of 0.41 percent (slightly more given the likelihood of some tax evasion). Since rich families would pay more in taxes than they would receive in free groceries, overall the program would act to redistribute wealth from the rich to the poor.

Free groceries for all avoids the major objections that are often raised against UBI: the money cannot be spent on drugs or alcohol; hardly anyone would quit their job for this amount of money; and conservative politicians would find it extremely difficult to whip up moral outrage about $600 for free food, especially for children.

Most important, the survey data shows clearly that regular people here and now would be keen on this type of program.

The Populus survey on UBI is the closest example I am aware of that directly asks people to state their preferences for free money versus free food. The survey asked 2,070 British adults their opinion on the following statement: “Rather than cash, the state should provide citizens with basic food supplies and social housing to meet their needs.” In response, 43 percent agreed, and 27 percent disagreed (20 percent were neutral, and 9 percent didn’t know).

Additional evidence comes from the fact that although many Americans are staunchly opposed to “welfare,” they tend to be much more supportive of the Supplemental Nutrition Assistance Program (SNAP, colloquially known as “food stamps”), which provides poor people with money to buy food, and only food. While large numbers of Americans believe that “government aid to the poor does more harm than good by making people too dependent on government,” surveys simultaneously find that 61 percent of people oppose reducing funding for SNAP .

Another study found that while 54 percent of Americans believe that too many people are dependent on the government for financial aid, only 36 percent are critical of food stamps . Interestingly, when Americans are presented with the objective facts of how much money for groceries SNAP recipients truly receive (an average of about $5.70 per family member per day), 66 percent of registered voters say that it should be increased , and only 4 percent say it should be decreased.

Strikingly, even 53 percent of Republicans believe that SNAP benefits should be increased. There is thus a broad bipartisan appeal to free food for the poor, even in a highly polarized political climate. (The Trump administration recently suspended payment of SNAP benefits for over two weeks, provoking an angry backlash ).

This belief in the importance of food security for all is widely shared. In Canada, a recent survey found that 85 percent agreed that the government should ensure that no child in Canada goes hungry, and 82 percent agreed with the statement that “people going hungry in Canada goes against our values .”

Taken together, this evidence suggests that many people have very different moral intuitions about the government providing free cash versus free food . Whereas free cash is widely perceived as an undeserved luxury, free food is not; it is more often seen as a basic human right. Providing free cash strikes many as morally questionable, but providing free food is just basic human decency. The upshot is that free groceries has potential for wide cross-party appeal in a way that a conventional UBI does not.

Free Groceries for All

We know that a policy like this is perfectly technically feasible, since there is already a working example in the SNAP program. Originally set up in 1974, SNAP is now a tried-and-true program, benefiting 41.2 million people via an Electronic Benefits Transfer (EBT) card. Most groceries are eligible, but alcohol, tobacco, gasoline, and other such things are not. Grocery stores must register with the government before they can accept EBT payment, but it is quick and easy to do so (the process is free and online), and almost all are registered, including most convenience stores and all the major chains.

SNAP proves the viability of a policy like free groceries for all. We know how to provide people with electronic cards; we know how to register grocery stores; most important, we know that when people get extra money for food, it helps their lives and improves their health .

Of course, $600 will not dramatically change anyone’s life. Nevertheless, millions and millions of people will receive it, appreciating the help from the government and their neighbors. Life will become a bit easier and a little less scary. And when life feels safer, the trans person or immigrant down the way will also seem less threatening. A large body of research bears this out: when economic insecurity worsens, support for right-wing populism typically increases , and when economic security improves, support for right-wing populism tends to decrease.

One final, vital point: studying the evidence of welfare states over the last century, social scientists have discovered that once universal entitlements are established, they typically grow over time and are only very rarely abolished. Sociologists refer to this phenomenon as the “stickiness of universal programs.”

A striking example is the case of pensions (known as Social Security in the United States). Once reviled as a socialist abomination in the 1930s, they are now extremely popular and taken for granted as a basic American right. Their stickiness is such that even after forty years of neoliberalism , conservatives have had very little success in reducing pension generosity. Of course, this is not an unbreakable law of nature. But it is an important tendency: once democratic majorities achieve universal policies that benefit them, and once they become accustomed to them, they will be highly resistant to losing them.

Getting a universal policy on the books in the first place is the most difficult part. But if this can be accomplished, then there are good reasons for thinking that, over time, the populace will enjoy the benefit, take it for granted, push for it to be expanded, and refuse to tolerate reductions.

I first started thinking about free groceries for all for my home context of Canada, but the policy would work well for many countries. The United States is the one place where things are trickier, because a different kind of program already exists in the form of SNAP (SNAP provides more money than the policy discussed here, but to only a small fraction of the population). The fact that SNAP recipients are a relatively small pool of poor and disproportionately non-white people means that SNAP is stigmatized and lacks the political muscle to fight for its expansion in the way that a more universal program would. So in the US case, the goal should be to slowly expand SNAP , making it more and more universal.

At the end of the day, $600 of free groceries per year is not enough to remedy the titanic insecurity that currently exists. But as a first step, it has a lot of potential. The social scientific evidence regarding the tendency of existing universal social programs to ratchet upward implies that it is strategically smart for the Left to start small in order to get such a program on the books in the first place. As a universal policy, truly huge numbers of people would receive the benefit — roughly 330 million Americans, 38 million Canadians, or 68 million British citizens. These enormous numbers mean that once instituted, free groceries for all would quickly become normal, familiar, and completely unstigmatized .

And once it’s seen as normal, it will quickly become normative: seen as good and natural, with the result that any attempt to remove it will be fiercely resisted. Once millions of people are tangibly enjoying the benefit, it is likely that it will expand over time. In this way, free groceries for all may well serve as a practical stepping stone toward the achievement of a more robust and radical UBI down the road.

===

Tee Malleson is an associate professor of social justice and peace studies at King’s University College at Western University, Canada. Their recent books include Against Inequality: The Practical and Ethical Case for Abolishing the Superrich .

The Talk Show: ‘Knee-Jerk Contrarian’

Daring Fireball
daringfireball.net
2025-11-18 23:34:58
Special guest Dan Frommer returns to the show. Topics include the indie media business, the iPhone Pocket, the iPhone Air (including rumors about the second generation model), AI “personalities”, and five years of Apple Silicon Macs. Also, six years of Dan’s site, The New Consumer. Sponsored by: ...
Original Article

The Talk Show

‘Knee-Jerk Contrarian’, With Dan Frommer

Special guest Dan Frommer returns to the show. Topics include the indie media business, the iPhone Pocket, the iPhone Air (including rumors about the second generation model), AI “personalities”, and five years of Apple Silicon Macs. Also, six years of Dan’s site, The New Consumer .

Sponsored by:

  • Factor : Healthy eating, made easy. Get 50% off your first box, plus free breakfast for 1 year, with code talkshow50off .
  • Notion : The AI workspace where teams and AI agents get more done together.
  • Squarespace : Save 10% off your first purchase of a website or domain using code talkshow .

Transcripts: Unofficial but surprisingly good .

Links:

This episode of The Talk Show was edited by Caleb Sexton.

Cloudflare outage on November 18, 2025 post mortem

Hacker News
blog.cloudflare.com
2025-11-18 23:31:22
Comments...
Original Article

2025-11-18

12 min read

On 18 November 2025 at 11:20 UTC (all times in this blog are UTC), Cloudflare's network began experiencing significant failures to deliver core network traffic. This showed up to Internet users trying to access our customers' sites as an error page indicating a failure within Cloudflare's network.

HTTP error page displayed during the incident

The issue was not caused, directly or indirectly, by a cyber attack or malicious activity of any kind. Instead, it was triggered by a change to one of our database systems' permissions which caused the database to output multiple entries into a “feature file” used by our Bot Management system. That feature file, in turn, doubled in size. The larger-than-expected feature file was then propagated to all the machines that make up our network.

The software running on these machines to route traffic across our network reads this feature file to keep our Bot Management system up to date with ever changing threats. The software had a limit on the size of the feature file that was below its doubled size. That caused the software to fail.

After we initially wrongly suspected the symptoms we were seeing were caused by a hyper-scale DDoS attack, we correctly identified the core issue and were able to stop the propagation of the larger-than-expected feature file and replace it with an earlier version of the file. Core traffic was largely flowing as normal by 14:30. We worked over the next few hours to mitigate increased load on various parts of our network as traffic rushed back online. As of 17:06 all systems at Cloudflare were functioning as normal.

We are sorry for the impact to our customers and to the Internet in general. Given Cloudflare's importance in the Internet ecosystem any outage of any of our systems is unacceptable. That there was a period of time where our network was not able to route traffic is deeply painful to every member of our team. We know we let you down today.

This post is an in-depth recount of exactly what happened and what systems and processes failed. It is also the beginning, though not the end, of what we plan to do in order to make sure an outage like this will not happen again.

The outage

The chart below shows the volume of 5xx error HTTP status codes served by the Cloudflare network. Normally this should be very low, and it was right up until the start of the outage.

Volume of HTTP 5xx requests served by the Cloudflare network

The volume prior to 11:20 is the expected baseline of 5xx errors observed across our network. The spike, and subsequent fluctuations, show our system failing due to loading the incorrect feature file. What’s notable is that our system would then recover for a period. This was very unusual behavior for an internal error.

The explanation was that the file was being generated every five minutes by a query running on a ClickHouse database cluster, which was being gradually updated to improve permissions management. Bad data was only generated if the query ran on a part of the cluster which had been updated. As a result, every five minutes there was a chance of either a good or a bad set of configuration files being generated and rapidly propagated across the network.

This fluctuation made it unclear what was happening as the entire system would recover and then fail again as sometimes good, sometimes bad configuration files were distributed to our network. Initially, this led us to believe this might be caused by an attack. Eventually, every ClickHouse node was generating the bad configuration file and the fluctuation stabilized in the failing state.

Errors continued until the underlying issue was identified and resolved starting at 14:30. We solved the problem by stopping the generation and propagation of the bad feature file, manually inserting a known good file into the feature file distribution queue, and then forcing a restart of our core proxy.

The remaining long tail in the chart above is our team restarting remaining services that had entered a bad state, with 5xx error code volume returning to normal at 17:06.

The following services were impacted:

  • Core CDN and security services: HTTP 5xx status codes. The screenshot at the top of this post shows a typical error page delivered to end users.

  • Turnstile: Turnstile failed to load.

  • Workers KV: Workers KV returned a significantly elevated level of HTTP 5xx errors as requests to KV’s “front end” gateway failed due to the core proxy failing.

  • Dashboard: While the dashboard was mostly operational, most users were unable to log in due to Turnstile being unavailable on the login page.

  • Email Security: While email processing and delivery were unaffected, we observed a temporary loss of access to an IP reputation source which reduced spam-detection accuracy and prevented some new-domain-age detections from triggering, with no critical customer impact observed. We also saw failures in some Auto Move actions; all affected messages have been reviewed and remediated.

  • Access: Authentication failures were widespread for most users, beginning at the start of the incident and continuing until the rollback was initiated at 13:05; any existing Access sessions were unaffected. All failed authentication attempts resulted in an error page, meaning none of these users ever reached the target application while authentication was failing, and successful logins during this period were correctly logged. Any Access configuration updates attempted at that time would have either failed outright or propagated very slowly; all configuration updates are now recovered.

As well as returning HTTP 5xx errors, we observed significant increases in latency of responses from our CDN during the impact period. This was due to large amounts of CPU being consumed by our debugging and observability systems, which automatically enhance uncaught errors with additional debugging information.

How Cloudflare processes requests, and how this went wrong today

Every request to Cloudflare takes a well-defined path through our network. It could be from a browser loading a webpage, a mobile app calling an API, or automated traffic from another service. These requests first terminate at our HTTP and TLS layer, then flow into our core proxy system (which we call FL for “Frontline”), and finally through Pingora, which performs cache lookups or fetches data from the origin if needed.

We previously shared more detail about how the core proxy works here .

Diagram of our reverse proxy architecture

As a request transits the core proxy, we run the various security and performance products available in our network. The proxy applies each customer’s unique configuration and settings, from enforcing WAF rules and DDoS protection to routing traffic to the Developer Platform and R2. It accomplishes this through a set of domain-specific modules that apply the configuration and policy rules to traffic transiting our proxy.
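As a purely illustrative sketch of that module-chain idea (the type and function names here are my own inventions, not Cloudflare's code), the core proxy can be pictured as a pipeline of per-request modules, of which Bot Management is just one:

// Illustrative sketch only: a request passes through a chain of modules, and a
// failure in any module fails the whole request. Names are assumptions.
struct Request {
    path: String,
    bot_score: Option<u8>,
}

trait Module {
    fn apply(&self, req: &mut Request) -> Result<(), String>;
}

struct BotManagement;

impl Module for BotManagement {
    fn apply(&self, req: &mut Request) -> Result<(), String> {
        // In the real system this consults a machine learning model fed by the
        // "feature" configuration file described below.
        req.bot_score = Some(30);
        Ok(())
    }
}

fn handle(req: &mut Request, modules: &[Box<dyn Module>]) -> Result<(), String> {
    for module in modules {
        module.apply(req)?; // an error here surfaces to the client as a 5xx
    }
    Ok(())
}

fn main() {
    let modules: Vec<Box<dyn Module>> = vec![Box::new(BotManagement)];
    let mut req = Request { path: "/".to_string(), bot_score: None };
    match handle(&mut req, &modules) {
        Ok(()) => println!("served {} (bot score {:?})", req.path, req.bot_score),
        Err(e) => println!("500: {e}"),
    }
}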

One of those modules, Bot Management, was the source of today’s outage.

Cloudflare’s Bot Management includes, among other systems, a machine learning model that we use to generate bot scores for every request traversing our network. Our customers use bot scores to control which bots are allowed to access their sites — or not.

The model takes as input a “feature” configuration file. A feature, in this context, is an individual trait used by the machine learning model to make a prediction about whether the request was automated or not. The feature configuration file is a collection of individual features.

This feature file is refreshed every few minutes and published to our entire network and allows us to react to variations in traffic flows across the Internet. It allows us to react to new types of bots and new bot attacks. So it’s critical that it is rolled out frequently and rapidly as bad actors change their tactics quickly.

A change in our underlying ClickHouse query behaviour (explained below) that generates this file caused it to have a large number of duplicate “feature” rows. This changed the size of the previously fixed-size feature configuration file, causing the bots module to trigger an error.

As a result, HTTP 5xx error codes were returned by the core proxy system that handles traffic processing for our customers, for any traffic that depended on the bots module. This also affected Workers KV and Access, which rely on the core proxy.

Unrelated to this incident, we were and are currently migrating our customer traffic to a new version of our proxy service, internally known as FL2 . Both versions were affected by the issue, although the impact observed was different.

Customers deployed on the new FL2 proxy engine observed HTTP 5xx errors. Customers on our old proxy engine, known as FL, did not see errors, but bot scores were not generated correctly, resulting in all traffic receiving a bot score of zero. Customers that had rules deployed to block bots would have seen large numbers of false positives. Customers who were not using our bot score in their rules did not see any impact.

Throwing us off and making us believe this might have been an attack was another apparent symptom we observed: Cloudflare’s status page went down. The status page is hosted completely off Cloudflare’s infrastructure with no dependencies on Cloudflare. While it turned out to be a coincidence, it led some of the team diagnosing the issue to believe that an attacker may be targeting both our systems as well as our status page. Visitors to the status page at that time were greeted by an error message:

Error on the Cloudflare status page

In the internal incident chat room, we were concerned that this might be the continuation of the recent spate of high volume Aisuru DDoS attacks :

Internal chat screenshot

The query behaviour change

I mentioned above that a change in the underlying query behaviour resulted in the feature file containing a large number of duplicate rows. The database system in question uses ClickHouse’s software.

For context, it’s helpful to know how ClickHouse distributed queries work. A ClickHouse cluster consists of many shards. To query data from all shards, we have so-called distributed tables (powered by the table engine Distributed ) in a database called default . The Distributed engine queries underlying tables in a database r0 . The underlying tables are where data is stored on each shard of a ClickHouse cluster.

Queries to the distributed tables run through a shared system account. As part of efforts to improve our distributed queries security and reliability, there’s work being done to make them run under the initial user accounts instead.

Before today, ClickHouse users would only see the tables in the default database when querying table metadata from ClickHouse system tables such as system.tables or system.columns .

Since users already have implicit access to underlying tables in r0 , we made a change at 11:05 to make this access explicit, so that users can see the metadata of these tables as well. By making sure that all distributed subqueries can run under the initial user, query limits and access grants can be evaluated in a more fine-grained manner, avoiding one bad subquery from a user affecting others.

The change explained above resulted in all users accessing accurate metadata about tables they have access to. Unfortunately, there were assumptions made in the past that the list of columns returned by a query like this would only include the “default” database:

SELECT name, type FROM system.columns WHERE table = 'http_requests_features' order by name;

Note how the query does not filter for the database name. With us gradually rolling out the explicit grants to users of a given ClickHouse cluster, after the change at 11:05 the query above started returning “duplicates” of columns because those were for underlying tables stored in the r0 database.

This, unfortunately, was the type of query that was performed by the Bot Management feature file generation logic to construct each input “feature” for the file mentioned at the beginning of this section.

The query above would return a table of columns like the one displayed (simplified example):

Example of code block

However, as part of the additional permissions granted to the user, the response now contained all the metadata of the r0 schema as well, effectively more than doubling the rows in the response and, ultimately, the number of rows (i.e. features) in the final file output.

Memory preallocation

Each module running on our proxy service has a number of limits in place to avoid unbounded memory consumption and to preallocate memory as a performance optimization. In this specific instance, the Bot Management system has a limit on the number of machine learning features that can be used at runtime. Currently that limit is set to 200, well above our current use of ~60 features. Again, the limit exists because for performance reasons we preallocate memory for the features.

When the bad file with more than 200 features was propagated to our servers, this limit was hit — resulting in the system panicking. The FL2 Rust code that makes the check and was the source of the unhandled error is shown below:

code that generated the error

This resulted in the following panic which in turn resulted in a 5xx error:

thread fl2_worker_thread panicked: called Result::unwrap() on an Err value
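The actual check lives in Cloudflare's FL2 Rust code, which appears above only as a placeholder in this capture. Purely as an illustration of the failure mode described in the post (the preallocated 200-feature limit and an unwrapped Result), and not as a reconstruction of the real module, a sketch might look like this:

// Illustrative sketch, not Cloudflare's actual code: features are loaded into a
// buffer preallocated for at most 200 entries, the loader returns an Err when a
// configuration file exceeds that limit, and the caller unwraps that Err --
// which panics the worker thread, as in the error message quoted above.
const MAX_FEATURES: usize = 200;

#[derive(Debug)]
struct TooManyFeatures {
    got: usize,
}

fn load_features(rows: &[String]) -> Result<Vec<String>, TooManyFeatures> {
    if rows.len() > MAX_FEATURES {
        return Err(TooManyFeatures { got: rows.len() });
    }
    let mut features = Vec::with_capacity(MAX_FEATURES); // preallocated for performance
    features.extend_from_slice(rows);
    Ok(features)
}

fn main() {
    // A "bad" feature file: duplicated rows push the count well past the limit.
    let rows: Vec<String> = (0..260).map(|i| format!("feature_{i}")).collect();

    // Unwrapping the Err is what turns a recoverable condition into a panic.
    let _features = load_features(&rows).unwrap();
}

Handling the Err instead of unwrapping it (for example, by falling back to the last known-good feature set) would have kept the condition recoverable, which is in line with the hardening steps listed later in the post.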

Other impact during the incident

Other systems that rely on our core proxy were impacted during the incident. This included Workers KV and Cloudflare Access. The team was able to reduce the impact to these systems at 13:04, when a patch was made to Workers KV to bypass the core proxy. Subsequently, all downstream systems that rely on Workers KV (such as Access itself) observed a reduced error rate.

The Cloudflare Dashboard was also impacted due to both Workers KV being used internally and Cloudflare Turnstile being deployed as part of our login flow.

Turnstile was impacted by this outage, resulting in customers who did not have an active dashboard session being unable to log in. This showed up as reduced availability during two time periods: from 11:30 to 13:10, and between 14:40 and 15:30, as seen in the graph below.

availability of Cloudflare internal APIs during the incident

The first period, from 11:30 to 13:10, was due to the impact to Workers KV, which some control plane and dashboard functions rely upon. This was restored at 13:10, when Workers KV bypassed the core proxy system. The second period of impact to the dashboard occurred after restoring the feature configuration data. A backlog of login attempts began to overwhelm the dashboard. This backlog, in combination with retry attempts, resulted in elevated latency, reducing dashboard availability. Scaling control plane concurrency restored availability at approximately 15:30.

Now that our systems are back online and functioning normally, work has already begun on how we will harden them against failures like this in the future. In particular we are:

  • Hardening ingestion of Cloudflare-generated configuration files in the same way we would for user-generated input

  • Enabling more global kill switches for features

  • Eliminating the ability for core dumps or other error reports to overwhelm system resources

  • Reviewing failure modes for error conditions across all core proxy modules

Today was Cloudflare's worst outage since 2019. We've had outages that have made our dashboard unavailable, and some that have caused newer features to not be available for a period of time. But in the last 6+ years we've not had another outage that has caused the majority of core traffic to stop flowing through our network.

An outage like today is unacceptable. We've architected our systems to be highly resilient to failure to ensure traffic will always continue to flow. When we've had outages in the past it's always led to us building new, more resilient systems.

On behalf of the entire team at Cloudflare, I would like to apologize for the pain we caused the Internet today.

Timeline (all times UTC):

  • 11:05 - Normal. Database access control change deployed.

  • 11:28 - Impact starts. Deployment reaches customer environments; first errors observed on customer HTTP traffic.

  • 11:32-13:05 - The team investigated elevated traffic levels and errors to the Workers KV service. The initial symptom appeared to be a degraded Workers KV response rate causing downstream impact on other Cloudflare services. Mitigations such as traffic manipulation and account limiting were attempted to bring the Workers KV service back to normal operating levels. The first automated test detected the issue at 11:31, manual investigation started at 11:32, and the incident call was created at 11:35.

  • 13:05 - Workers KV and Cloudflare Access bypass implemented; impact reduced. During investigation, we used internal system bypasses for Workers KV and Cloudflare Access so they fell back to a prior version of our core proxy. Although the issue was also present in prior versions of our proxy, the impact was smaller, as described above.

  • 13:37 - Work focused on rollback of the Bot Management configuration file to a last-known-good version. We were confident that the Bot Management configuration file was the trigger for the incident, and teams worked on ways to repair the service in multiple workstreams, with the fastest workstream a restore of a previous version of the file.

  • 14:24 - Stopped creation and propagation of new Bot Management configuration files. We identified that the Bot Management module was the source of the 500 errors and that this was caused by a bad configuration file, so we stopped automatic deployment of new Bot Management configuration files.

  • 14:24 - Test of new file complete. We observed successful recovery using the old version of the configuration file and then focused on accelerating the fix globally.

  • 14:30 - Main impact resolved. A correct Bot Management configuration file was deployed globally; most services started operating correctly and downstream impacted services started observing reduced errors.

  • 17:06 - All services resolved. Impact ends. All downstream services restarted and all operations fully restored.


Ford can't find mechanics for $120K: It takes math to learn a trade

Hacker News
www.joannejacobs.com
2025-11-18 23:23:04
Comments...
Original Article

Ford CEO Jim Farley complained that "we don't have trade schools anymore," reports Avi Zilber in the New York Post.

The Ford CEO's grandfather was one of the company's early employees, hired to work on the Model T. “We are not investing in educating a next generation of people like my grandfather who had nothing, who built a middle class life and a future for his family,” Farley said.

Ford is spending $4 million to fund scholarships for auto technicians.

“The community colleges, the career tech programs do a solid job in providing foundational training, but we often see that they’re out of date when it comes to keeping up with how fast things are moving from a technology standpoint,” said Rich Garrity, a board member of the National Association of Manufacturers.

"Today’s auto technicians work with computer software, advanced sensors, high-voltage systems, and digital schematics," he writes. "Servicing an electric vehicle requires interpreting data flows, troubleshooting electronics, and following precise, multistep instructions." It's not a job for "grease monkeys."

At University of California San Diego, one of the nation's top public universities, one-in-eight freshmen can’t do middle-school math . They were passed on with inflated grades: 25 percent of remedial math students earned straight A's in high school math, and 20 percent passed Calculus.

National test scores show most students have weak reading and math skills. They can't just "fall back" on a trades job, writes Pondiscio. They're not prepared for that either.

Workers who struggle to read grade-level text cannot read complicated technical manuals or diagnostic instructions. If they can’t handle middle-school math they can’t program high-tech machines or robotics, or operate the automated equipment found in modern factories and repair shops.

America has good jobs, writes Pondiscio. "It lacks a K–12 system capable of preparing students to seize them."

Many years ago, when cars were a lot simpler, a high school shop teacher told me that few of his students had any chance of working as auto mechanics. "They can't read the manual," he said.

Companies are laying off white-collar workers, but there are high-paying opportunities in the skilled trades , tweets Mike Rowe. While touring a data center, he met with young electricians "making well over $200K a year. They constantly get offers from the competition for ever-increasing salaries, because the need for electricians is acute, and their jobs are not threatened by robots or AI."

Rowe is expanding his scholarship program for trade-school students.

An actor-model multi-core scheduler for OCaml 5

Lobsters
github.com
2025-11-18 23:07:23
Comments...
Original Article

An actor-model multi-core scheduler for OCaml 5.

Quick Start | Tutorial | Reference

Riot is an actor-model multi-core scheduler for OCaml 5. It brings Erlang -style concurrency to the language, where lightweight processes communicate via message-passing.

open Riot

type Message.t += Hello_world

let () =
  Riot.run @@ fun () ->
  let pid =
    spawn (fun () ->
        match receive () with
        | Hello_world ->
            Logger.info (fun f -> f "hello world from %a!" Pid.pp (self ()));
            shutdown ())
  in
  send pid Hello_world

At its core Riot aims to offer:

  • Automatic multi-core scheduling – when you spawn a new Riot process, it will automatically get allocated on a random scheduler.

  • Lightweight processes – spawn 10 or 10,000 processes as you see fit.

  • Fast, type-safe message passing

  • Selective receive expressions – when receiving messages, you can skim through a process mailbox to consume them in arbitrary order.

  • Process links and monitors to keep track of the lifecycle of processes

Riot also includes:

  • Supervisors to build process hierarchies

  • Logging and Telemetry designed to be multicore friendly

  • an Application interface to orchestrate startup/shutdown of systems

  • Generic Servers for designing encapsulated services like with Elixir's GenServer

Non-goals

At the same time, there's a few things that Riot is not, and does not aim to be.

Primarily, Riot is not a full port of the Erlang VM and it won't support several of its use-cases, like:

  • supporting Erlang or Elixir bytecode
  • hot-code reloading in live applications
  • function-call level tracing in live applications
  • ad-hoc distribution

Quick Start

After that, you can use any of the examples as a base for your app, and run them:

Acknowledgments

Riot is the continuation of the work I started with Caramel , an Erlang-backend for the OCaml compiler.

It was heavily inspired by eio by the OCaml Multicore team and miou by Calascibetta Romain and the Robur team , as I learned more about Algebraic Effects. In particular the Proc_state is based on the State module in Miou.

And a thousand thanks to Calascibetta Romain and Antonio Monteiro for the discussions and feedback.

Sir, This Is a McDonald’s

Daring Fireball
www.thebulwark.com
2025-11-18 23:04:12
Jonathan Last, writing for The Bulwark: The president of the United States gave a speech yesterday before a group of McDonald’s corporate workers and franchise owners. I’m going to quote a few sections of his remarks at great length, because if you have not listened to Trump speaking recently, t...
Original Article
(Composite / Photos: GettyImages / Shutterstock)

Quick preview: I’m working on a banger for tomorrow. My operating thesis is that 2025 was the period of maximum danger for democracy and I can’t believe I’m saying this but . . . maybe we’ve made it through the crucible?

Lots can still go wrong. Trump can do more damage. And if he decides to go full-dictator, there will be at least one more constitutional crisis (and possibly three more).

I know. It feels strange typing these words. But I wanted to give myself a couple days to think through this idea. It should be ready tomorrow.

In the meantime . . .

The president of the United States gave a speech yesterday before a group of McDonald’s corporate workers and franchise owners. I’m going to quote a few sections of his remarks at great length, because if you have not listened to Trump speaking recently, the decline in his cognitive abilities is a bit shocking.

The point of this exercise is not to clown on Trump, but to give everyone a baseline understanding of where he is, with the mentals, as we try to understand how he will respond to increasing pressures in the coming months.

The video of his remarks is here and I’ll include timestamps for each section, in case you want to see what he looks and sounds like.

Bottom line: This is a man in noticeable mental decline.

Here is Trump at the very beginning (1:40) doing the litany of greetings and salutations that begin presidential speeches:

But I want to thank, uh, as you know the famous Sundar and Sergey, Sergey Brin. These are two guys that own and run a place called Google. They called me the following day after I did that McDonald’s little um, skit, because it was it wasn’t a commercial. You got it for nothing. It was a skit and they told me that it and I didn’t know them. I just I said, “Who are they?” They own Google. I said, “That’s pretty good. That’s not bad.”

And uh that it received more hits than anything else in the history of Google and that records, it still stands.

Did Sundar Pichai and Sergey Brin call Donald Trump on October 21, 2024? Did they tell him that in just twenty-four hours he’d gotten more “hits” than anything else in the history of Google? More than COVID? More than January 6th? More than Taylor Swift? What is a “hit” on Google?

But most importantly: It’s unclear whether Trump understands that Pichai and Brin were not in attendance at the McDonald’s speech. (At least I cannot find any evidence that either of them was there.) Trump sounds like he’s thanking people who aren’t there as if they were sitting in the front row.

At the 10:30 mark Trump is clearly on-script doing an encomium to the virtues of McDonald’s before weaving off-script:

llm-gemini 0.27

Simon Willison
simonwillison.net
2025-11-18 23:00:40
llm-gemini 0.27 New release of my LLM plugin for Google's Gemini models: Support for nested schemas in Pydantic, thanks Bill Pugh. #107 Now tests against Python 3.14. Support for YouTube URLs as attachments and the media_resolution option. Thanks, Duane Milne. #112 New model: gemini-3-pro-preview....
Original Article

llm-gemini 0.27 . New release of my LLM plugin for Google's Gemini models:

  • Support for nested schemas in Pydantic, thanks Bill Pugh . #107
  • Now tests against Python 3.14.
  • Support for YouTube URLs as attachments and the media_resolution option. Thanks, Duane Milne . #112
  • New model: gemini-3-pro-preview . #113

The YouTube URL feature is particularly neat, taking advantage of this API feature . I used it against the Google Antigravity launch video :

llm -m gemini-3-pro-preview \
 -a 'https://www.youtube.com/watch?v=nTOVIGsqCuY' \
 'Summary, with detailed notes about what this thing is and how it differs from regular VS Code, then a complete detailed transcript with timestamps'

Here's the result . A spot-check of the timestamps against points in the video shows them to be exactly right.

Monotype font licencing shake-down

Hacker News
www.insanityworks.org
2025-11-18 22:42:45
Comments...
Original Article


tl;dr Don’t try to shake-down a typography nerd with your dubious, automated claims about his employer using unlicensed fonts.

How it started

It started with a LinkedIn InMail message (sanitised to protect privacy):

Subject: [Urgent] Font Software Licensing Review

Hi Ameel

I hope you’re doing well.

I’m [NAME] from Monotype and have been trying to reach you at [WORK EMAIL ADDRESS], but I’m unsure if my emails have been received.

Our team has identified Monotype font software embedded in the websites/apps of [YOUR COMPANY], but we couldn’t locate the corresponding licenses in our database.

Would you be able to share the correct email address so I can provide more details and documentation? Alternatively, you’re welcome to reach out to me directly at [SENDER’S EMAIL ADDRESS]

I appreciate your time and look forward to resolving this with you at your earliest convenience.

Best regards,

[NAME]

Business Development Representative | Monotype, Australia

I was puzzled by this for two reasons:

  1. This person had my correct work email address, but at no point had they sent me an email about this issue or about anything else. I checked. (Starting with a lie? Not a good look.)

  2. As far as I knew, my employer didn’t use any Monotype fonts on its websites and apps.

Always good to check

Just in case there was any merit to this claim, I did a quick review and found that I was right. In Australia we have one corporate website, one retail website, and one retail app (with iOS and Android versions) and all of those use our official font which, for better or for worse, is Open Sans .

Now, not only is Open Sans not a Monotype typeface, it is available under the SIL Open Font License (OFL), meaning we can use this typeface for literally anything except selling the font itself. So that couldn’t be what this person was talking about.

Screenshot of the SIL Open Font Licence home page. Text below the heading reads, “The widely used, community-approved, free, libre, and open source license specifically designed for fonts and related software: a solid legal framework for worldwide development, sharing, and improvement of fonts and related software in a collaborative manner.”

Next I checked our international retail websites and our project and partnership websites, since all of those have their own branding.

I found that on the websites we directly manage, we use these typefaces:

  • Roboto , available under the SIL OFL

  • Asap , available under the SIL OFL

So no issue there.

And on the websites our partners manage, we use these typefaces:

  • Public Sans , available under the SIL OFL,

  • Network Sans , a custom font created for the government agency that built the website it appears on (so no licence is needed), and

  • Proxima Nova , the only font that does require a licence, except that Monotype doesn’t sell a licence to it.

So no issue there either.
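None of these checks need special tooling. A rough way to see which web fonts a site actually loads – a sketch, not necessarily how the audit above was done, and the URLs are placeholders – is to pull the page and its stylesheets and grep for @font-face rules and font files:

curl -s https://www.example.com/ \
 | grep -oiE 'href="[^"]*\.css[^"]*"'      # find the linked stylesheets

curl -s https://www.example.com/assets/main.css \
 | grep -iE '@font-face|font-family|\.(woff2?|ttf|otf)'   # fonts the CSS actually references

Whatever font names turn up can then be checked against the licences you hold (or, as here, against the SIL OFL).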

Promotional graphics displaying the Roboto, Asap, Public Sans, Network Sans, and Proxima Nova typefaces.

With those initial checks done, I reached out to our digital team (who build and manage our websites and apps) with a screenshot of the LinkedIn message I’d received and a summary of my investigation. I asked them how they wanted me to reply.

I needed to check with these folks first because my team and I only look after the corporate website. My employer’s overall digital presence – including the back-end of the corporate website – is managed by the digital team.

Turns out a couple of people from the digital and design teams had received identical messages from this Monotype “Business Development Representative”.

A few internal back-and-forth emails later we decided that:

  • Instead of all of us responding, only one nominated person from the digital team would respond.

  • But before responding, the digital team would do their own investigation into the fonts we use and the licences we own so we could verify everything was in compliance.

Of course that’s not how things actually went down.

Three-panel meme showing Oprah Winfrey pointing at members of her TV talk show audience apparently shouting, at least according to the text captions at the bottom of each panel, “You get a LinkedIn message!”

Fishing (phishing?) around

What the Monotype rep did next is kind of what a malicious hacker does when they’re trying to get someone from your company to click on a link that’ll install malware on your computer. Over the next couple of weeks, the rep messaged a dozen or so more people from different parts of the business, hoping to hook just one person who would reply to the scary message they were sending.

Now I’d already emailed my design, brand, and digital team colleagues to tell them about this mass-messaging campaign and our plan of action for it, but the Monotype rep expanded their campaign to include people from our procurement team, who I hadn’t thought to forewarn.

So not long after, I received a message from one of my procurement team colleagues who’d been forwarded that LinkedIn message from their senior manager with an instruction to deal with this. I explained to my colleague that, as far as I could tell, this Monotype campaign was similar to the domain name scams the procurement team is already familiar with. So please sit tight till our digital team colleagues have completed their audit and then we’ll figure out which one person should start the conversation with Monotype.

But, like any successful phishing campaign, the Monotype rep’s LinkedIn messages eventually reached someone who did respond. This was another person in the procurement team and, just to be completely clear, I don’t blame them for responding. They were just doing their job of protecting our business from potential copyright liability.

Being forced to deal with the issue

Since I’d handed this over to the digital team, I hadn’t kept track of how things were progressing. I was brought back into the discussion when our brand manager included me in an email thread between her and the procurement person who’d responded to Monotype.

I quickly brought this second procurement person up to speed with our earlier plan of action and then I looped in the digital team again. Turns out the digital team had completed their audit, found that we were in compliance, but had gotten busy with other work so no one had responded to Monotype. *sigh*

Screenshot from the TV series ‘Star Trek: The Next Generation’ showing the character of Captain Jean-Luc Picard sitting in the captain’s chair with his hand covering his face, typically referred to as a “face palm” gesture.

Now, of course, everyone was on the back foot because our new procurement rep had shared the PDF that Monotype had sent, listing all the places where we were using Monotype fonts without a proper licence.

And, to quote from the procurement rep’s email:

Supplier has confirmed two options:

Past Use License Agreement is used (PULA) to cover the usage for the period without a license if the software is to be removed.

Process the PULA with a Go Forward license agreement to allow compliant continued use of the fonts in use.

Reading that, I got this procurement rep to quickly convene a meeting with everyone involved, though it turned out the person from the digital team who’d done the font audit had gone on annual leave.

The rest of the digital team didn’t know anything about font licencing and this was the first time this procurement rep was dealing with font licencing as well. So, partly spurred on by their senior manager’s instruction to deal with this, the procurement rep was seriously considering paying the licencing fee that Monotype had asked for, just to make this whole headache go away.

This is where I jumped in and told everyone to hold up. I said I would take the lead on this internally and I would take over the discussion we were having with Monotype as well. The procurement rep, I think somewhat relieved to have this taken out of their hands, agreed.

Why did I insist on taking this over? Two reasons:

  • I’m a bit of a typography nerd so I know what I’m talking about and

  • a quick look at the document Monotype had sent over with the list of our alleged copyright infringements had told me that everything the Monotype rep was alleging was wrong.

Screenshot from the film ‘Star Wars: The Rise of Skywalker’ in which the character Luke Skywalker is saying, “Amazing. Every word of what you just said was wrong.”

Never send an AI to do a human’s job

An April 2025 blog post from Studio Twofold’s Jamie Walker titled ‘ Unlicensed Fonts: The Hidden Risk in your Branding ’ opens with:

Lately, we’ve seen a noticeable uptick in copyright issues related to both images and fonts. Several clients have reached out after receiving emails about fonts [an] agency used on their site years ago.

Naturally, they’re a bit rattled — and with good reason. These kinds of copyright claims can come with hefty fees if you’re found to be in breach.

Further down Jamie writes:

Smaller businesses can get caught off guard — and with AI now scanning the web for even the tiniest copyright infringements , it’s more important than ever to stay protected.

Think they’re only picking on the small guys? Think again. Even giants have been caught out – proving no one is too big to face the music (or in this case, the typography).

Basically, the reason so many organisations are getting out-of-the-blue copyright claims these days is that major copyright holders have started using automated, AI-powered copyright infringement detection software, of which there is a lot out there. *sigh*

Monotype seems to have used one of these products too, because the two fonts its report claimed we were using without a valid licence are:

  • Credit Cards , and

  • Proxima Nova .

Let’s take those one at a time, shall we?

Screenshot from the film ‘The Princess Bride’ in which the character Inigo Montoya is saying, “Let me explain. No, there is too much. Let me sum up.”

What’s in a name?

The first thing I did as I read the report was look up the Credit Cards font on MyFonts.com, Monotype’s online store front.

Credit Cards is a pictogram font that contains these icons:

Screenshot from the MyFonts.com website showing the glyphs contained within the Credit Cards font.

When I saw that I thought to myself, “Why would we want to use those icons in our apps?”

Next I read Monotype’s report in detail and saw this screenshot. This is from an analysis of our app’s payload and is the apparent proof that Credit Cards is being used in our smartphone apps:

Screenshot of a table in a PDF file showing a 70 kilobyte size font file with the extension TTF and the filename, “CREDC” followed by three underscore characters.

Having worked with a great many font files over the years, when I saw that filename I thought to myself, “Are we sure that’s actually the Credit Cards font that Monotype claims it is?”

So I checked. I did a web search for fonts with “credit card” in their name and very quickly found one called ‘ Credit Card ’ – singular – from K-Type. This is a regular text font (as opposed to an icon font) that looks like the raised text that’s printed on credit cards.

Screenshot from the K-Type website showing the description and sales page of a font named Credit Card. The description starts with, “Credit Card is an all capitals font for simulating bank cards.”

Now that was something I could see us using in our apps.

Happily, Credit Card is free for personal use so I downloaded it and looked at the zip file. And guess what the filename of the font is?

Screenshot of software showing the contents of a zip file. A file among this list is highlighted. Its name is “CREDC” followed by three underscore characters. It has the extension TTF and is approximately 70 kilobytes in size.

Yup, the filename is CREDC___.ttf – which is exactly the filename that was in the app payload analysis from Monotype.

Seeing this, I reached out to my design team contact who then got me in touch with the person who manages our smartphone apps. From them I found out that, sure enough, the only font over and above Open Sans that we use in our smartphone apps is Credit Card by K-Type.

SCORE: Ameel 1, Monotype 0
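As an aside, a matching filename is suggestive, but the font’s internal name table settles it. Assuming fontconfig or lcdf-typetools is installed (neither is mentioned in the post), the family name can be read straight out of the TTF:

unzip -o credit-card.zip      # the K-Type download; filename assumed
fc-scan --format '%{family}: %{fullname}\n' CREDC___.ttf    # fontconfig
otfinfo -i CREDC___.ttf       # or lcdf-typetools: prints family, designer, copyright

Given what the app team confirmed, that should come back as K-Type’s Credit Card, not Monotype’s Credit Cards.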

Proxima Nova, really?

Unlike the Credit Cards font, our use of Mark Simonson’s Proxima Nova was never in contention. We clearly use it in one of our project websites. My employer didn’t actually build that website – we bought this under-construction project from another entity – but this website is very much our responsibility now.

The problem for Monotype here was that it no longer sells licences to Proxima Nova. There was a time you could buy a licence to Proxima Nova from Fonts.com, which was Linotype’s online marketplace. But Monotype bought Linotype and eventually killed off Fonts.com and, some time after that (I don’t know when or why), the font’s designer stopped selling licences to Proxima Nova through MyFonts.com.

Screenshot from the MyFonts website showing a page with an error message that reads, “The font is no longer available for purchase”.

Of course learning this fact didn’t mean that I was just going to stop my investigation. I reached out to a person, who reached out to a person, who reached out to the design agency that designed (and still maintains) our project website. The design agency contact did their own investigation and quickly confirmed that, yes, they do indeed have a licence to use Proxima Nova on this site – one that they had purchased from Adobe several years ago.

SCORE: Ameel 2, Monotype 0

Closing the book on this whole affair

Armed with this knowledge, I got the procurement person to introduce me to the Monotype rep. The rep and one of their colleagues were very eager to talk, replying to this introductory email within fifteen minutes. They wanted to organise a meeting so they could finally get the font licencing agreement signed. Instead what they got was a long email from me in which I explained the situation in detail, complete with annotated screenshots like the one above :)

The Monotype rep chewed on this for a few days and then made one final attempt at getting money out of us. They agreed that they were currently unable to sell a licence to Proxima Nova, but it turns out Monotype is one of K-Type’s authorised resellers and [they] “currently cannot see the license on our files for this use” – meaning there wasn’t a record of us purchasing a licence to Credit Card from Monotype. So could we please “confirm if there is one that we for some reason are unable to see in our systems?”.

*sigh*

I wrote back and told them the reason Monotype doesn’t have a record of this licence is because we purchased a one-off Enterprise Licence directly from K-Type several years ago.

This was several weeks ago and I haven’t heard a peep from them since.

¯\_(ツ)_/¯

Graphic showing cracked green paint on pavement on top of which white coloured text has been overlaid. The text is a quote from Ben Goldacre that reads, “I think you’ll find it’s a bit more complicated than that”.

Two sides to the story

Despite my making light of the situation, I don’t actually hate Monotype for doing this. Using fonts without purchasing a proper license (or purchasing the fonts outright) is stealing and you absolutely should not do it.

The fact that font licensing can be a complicated issue is not an excuse; lots of things are complicated and we figure them out.

More importantly, I think the type designers and type foundries that create fonts should be fairly compensated for their work. Paying for fonts, or an ongoing licence to those fonts, is how you do that.

In short, you should pay for fonts and you should call out people and organisations when they use fonts without a proper licence.

(For completeness’ sake, I should say that I also don’t mind that Monotype used automated systems to find copyright violations. The internet is so large that it’s impossible to manually find all the people who have stolen your stuff!)

It’s not what you do, it’s how you do it

That said, I hate how Monotype’s business development people went about doing this. Much like the blatantly overzealous content blockers on YouTube, the Monotype reps who reached out to us didn’t even bother to verify whether the report their AI spat out at them showed an actual copyright violation or not.

I mean, I know why they didn’t double-check. Just like with scammers and phishers , this is a volume game, not an accuracy or fairness game. You bombard people with messages, scaring them with your (potentially unverified) claims, and eventually some of the thousands of people you’ve messaged will reply. You then rush these folks into paying a licence fee because your targets don’t have enough information about font licencing and, frankly, they just want the problem to go away. This is a shitty way to do business and it reflects poorly on your organisation.

Screenshot from the Nebula.tv website showing a documentary with the title ‘Nebula Sans’. The short description of this documentary reads, “The story of a font built on principle, free to use for anyone who needs it.”

You’re not making any friends

Not that reputation seems to matter too much to popular digital marketplaces – Amazon being the poster child for this. They’re big, they’re arguably enshittified , and all they appear to care about is making as much money as possible.

I mean there’s a reason why so many type designers urge people not to licence fonts – even their own fonts – from MyFonts and instead buy or licence fonts directly from designer and type foundry websites. And, barring that, buying or licencing fonts from smaller, independent stores like Fontspring instead.

This is also why TypeType and Fontstand offer font subscriptions that are alternatives to those from Adobe Fonts and Monotype.

And this is why, for example, the independent video streaming site Nebula was forced to design their custom Nebula Sans font. Nebula’s website and streaming apps used to use the Screen Smart version of the Whitney font from Hoefler&Co . But when Monotype purchased Hoefler&Co, the new Monotype licencing/royalty structure meant that a licence to Whitney was suddenly unaffordable to Nebula. Since paying that (apparently much) higher amount to Monotype wasn’t going to be financially sustainable, Nebula instead paid Paul D. Hunt, the original designer of the excellent Source Sans font, to modify his font so it would be a drop-in replacement for Whitney in all of Nebula’s digital products. And because Source Sans was released under the SIL Open Font License, Nebula also released Nebula Sans under this OFL.

If that’s how much effort folks are willing to make to avoid using your company, you probably already know that you’re not very well loved.

Screenshot of a website banner that shows the date range, “November 6 – December 4” and reads, in fancy, bright-pink, all capital letters, “cyber sale” and, “shop now”.

What to do?

So what’s my take-away from all this?

  • Don’t use scammy tactics to scare people into purchasing your shit. And if you are going to use those tactics, at least don’t be wrong about it!

  • If anything like this happens to you or your employer, find the relevant nerd in your friend group or organisation and ask for their help.

  • If you can, avoid licencing fonts from Monotype. Get your fonts directly from the original designers and type foundries, or maybe from smaller, independent marketplaces like Fontspring (which has its big annual “cyber sale” on till 4 December, by the way).

  • If you’d rather avoid the hassle of font licencing altogether, then do what my employer did and pick an excellent, versatile OFL font and use that instead. Though, if you want to stand out from the crowd, please consider avoiding the most popular fonts on Google Fonts .

Oh, and while I’m far from an expert on typography, if you need a hand with anything font-related, please reach out. I’d be happy to help in any way that I can :)


A Month of Chat-Oriented Programming

Lobsters
checkeagle.com
2025-11-18 22:41:52
Comments...
Original Article

Or when did you last change your mind about something?

Nick Radcliffe. 12th November 2025.

TL;DR: I spent a solid month “pair programming” with Claude Code, trying to suspend disbelief and adopt a this-will-be-productive mindset. More specifically, I got Claude to write well over 99% of the code produced during the month. I found the experience infuriating, unpleasant, and stressful before even worrying about its energy impact. Ideally, I would prefer not to do it again for at least a year or two. The only problem with that is that it “worked”. It’s hard to know exactly how well, but I (“we”) definitely produced far more than I would have been able to do unassisted, probably at higher quality, and with a fair number of pretty good tests (about 1500). Against my expectation going in, I have changed my mind. I now believe chat-oriented programming (“CHOP”) can work today, if your tolerance for pain is high enough.

The notes below describe what has and has not worked for me, working with Claude Code for an intense month (in fact, more like six weeks now).

Context

I have been a fairly outspoken and public critic of large-language models (LLMs), Chatbots, and other applications of LLMs, arguing that they are a dead end on the road to real artificial intelligence. It is not that I don’t believe in AI: as an atheist and a scientist I regard humans and other animals as an existence proof for intelligence, and it seems obvious that other (“artificial”) intelligences could be built. I worked on neural networks in the late 1980s, and most of the progress since then appears to be largely the result of the mind-blowing increase in available computing power, data capacity, and accessible data, though the transformer architecture with its attention mechanism is novel, interesting, and crucial for LLMs. My position has been that the most accurate characterization of chatbots is as bullshit generators in the exact sense of bullshit that the philosopher Frankfurt defined ( On Bullshit ). LLMs predict tokens without regard to truth or falsity, correctness or incorrectness, and chatbots overlay this with reinforcement-learning with human feedback ( RLHF ), which creates the unbearable sycophancy of chatbots that so appeals to Boris Johnson .

While being somewhat sceptical about LLMs as coding assistants, I did think coding was an area relatively well suited to LLMs, and suspected that at some point over the next 10–20 years they would become essential tools in this area. Slightly reluctantly, therefore, I embarked on what I call a “month of CHOP”, where CHOP is short for chat-oriented programming. I decided I needed to repeat this every 12–24 months to avoid turning into a luddite.

CHOP is a term I learned from Steve Yegge and I use it to mean LLM-assisted programming that is almost the polar opposite of “Vibe Coding” :

There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like “decrease the padding on the sidebar by half” because I’m too lazy to find it. I “Accept All” always, I don’t read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I’d have to really read through it for a while. Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away. It’s not too bad for throwaway weekend projects, but still quite amusing. I’m building a project or webapp, but it’s not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.

— Andrej Karpathy ( @karpathy ), Twitter, 2025-02-02

By CHOP, roughly speaking, I mean pair-programming with Claude while not giving it an inch, using a fairly formal process with rules (see Standard Operating Procedure, below).

What I did

When I decided to embark on my Month of CHOP, I started by discussing and scoping 8 possible projects with ChatGPT, with the vague idea I might pick four of them and spend a week on each—new code, old code, a different language, and a new algorithm perhaps. The sidebar tells the story of how I ended up instead spending the whole month rebooting and reviving an abandoned project, CheckEagle, from 2008. That project was built on the first version of Google’s App Engine, using Python 2 against an API they abandoned in 2011.

What I have done during my Month of CHOP is to get Claude to write very nearly all of the code in this reboot of CheckEagle, in a pair-programming setup with, in effect, me as the senior developer and Claude as the enthusiastic-and-widely-read, cocksure junior programmer and bullshit artist extraordinaire. In terms of stats:

  • There are about 23,000 lines of Python code now (plus some JavaScript etc.)

  • There were about 3,000 lines of Python in the original CheckEagle project

  • There are 1,731 tests, all passing (plus 1 currently skipped).

  • I would be surprised if I have written a hundred lines of the perhaps 20,000 new lines generated during the month

I am not suggesting lines of code is a good metric . These are just numbers I have to hand.

On Anthropomorphizing Claude

In this piece, I am going to talk about Claude as if it were a person or an intelligence. I do not believe this to be the case. It is simply easier and less stilted to write this way. For short periods, interacting with Claude can feel like interacting with a person, though the illusion rarely lasts long.

Claude Code

For those who haven’t come across it, Claude Code is a terminal application from Anthropic , running under node.js , installed using npm . It allows developers to work with code on their local files by starting the program and typing in the terminal. When you use Claude Code, you are talking to Claude (usually Sonnet 4.5, in my case), but using its coding-trained application rather than its chat-trained application.

Claude Code has three main modes you can cycle between, all driven through chat.

  1. Default Mode. The starting mode allows it to edit files in the directory in which you start Claude Code (and subdirectories thereof), but Claude has to ask permission to execute each command (theoretically).

  2. Accept Edits Mode. In this mode you allow Claude to edit files without approval, but it still has to request permission to use other tools.

  3. Plan Mode. A planning mode in which Claude is only allowed to read files and discuss things, not to code. At the end of a planning session, Claude presents a plan for your approval or rejection with three options:

     • Accept plan and allow Claude to make edits (Accept Edits Mode);

     • Accept plan but continue to require approval for each edit (Default Mode);

     • Reject plan and tell Claude what to do instead.

In addition to these modes, you can start Claude Code with a --yolo flag ( you only live once ), which is essentially vibe-coding mode, in which Claude is allowed to do what it wants without approval. I have never used this mode and have no plans to do so.

Claude Code runs as whatever user starts it, enjoying that user’s permissions. It sometimes disobeys the safeguards in modes 1 and 3.

I do not use any kind of editor integration with Claude Code, but just type in terminal windows. It lives in its (terminal) box.

Stress and Level 3 Autonomous Driving

SAE (formerly the Society of Automotive Engineers) defines six widely recognized levels of automated driving systems , from 0 (no automation) to 5 (full automation). Level 3, Conditional Driving Automation , is an automated driving mode in which the human must be ready to take over but doesn’t normally need to do anything. I think of this as “Stay alert at all times and be ready to take over or you die”. This is a mode I think humans are entirely unsuited to. I hope never to encounter an autonomous vehicle at Level 3.

I find coding with Claude a lot like this, except that interventions are frequently required. I do planning sessions with it, agree plans, and let it code, sometimes in mode 1 (approve each change) and sometimes in mode 2 (accept edits). Either way, I am watching what it does like a hawk , always ready to hit ESCAPE and get Claude to explain itself, reverse a change, or sometimes do git reset and start again.

Early in the month of CHOP I let a lot of things go, but over time I have learned it is more productive to stop Claude as soon as I see anything that looks wrong, weird, or dangerous. This is surprisingly stressful, and sometimes I am too late. Three times in the last two days it has destroyed nearly working code, cheerfully saying “Let’s revert that” and doing a git checkout before I have managed to hit ESCAPE . “Not yet, Baloo...!”

On the Breadth and Depth of Claude’s Knowledge

Claude has been trained, to a first approximation, on everything on the web, including all public code on the web, all books, and much more besides. It has clearly been trained also by “watching” developers work in some fashion (videos perhaps; I’m not sure). It has literally hundreds of billions of parameters (knobs that are adjusted during training). It “knows” essentially every programming language, every published algorithm, every library. So it’s tempting to think that Claude’s knowledge is broad but shallow.

But that’s wrong. Claude doesn’t only have a surface knowledge of languages, libraries, and algorithms: it has extremely deep knowledge of them. It’s seen them used countless times, in countless situations, read the documentation, and in many cases has read the code.

So Claude’s knowledge is broad and deep.

There are several problems with saying Claude’s knowledge is broad and deep.

Does a library have broad and deep knowledge? Of course not. A library “contains” knowledge but knows nothing. There is a sense in which Claude might be said to “know” something, but I think its “knowledge” is more like a library’s knowledge than a person’s knowledge.

A slightly superficial version of this is an exchange I had when I asked Claude whether it could create images and it said it couldn’t. I then asked whether it knew SVG (scalable vector graphics) and it said it did. I then asked whether it could create an image by generating SVG and it said of course it could (“You’re absolutely right”).

This reminds me of Chapter 2 of Brave New World, by Aldous Huxley:

“These early experimenters,” the D.H.C. was saying, “were on the wrong track. They thought that hypnopædia could be made an instrument of intellectual education …”

(A small boy asleep on his right side, the right arm stuck out, the right hand hanging limp over the edge of the bed. Through a round grating in the side of a box a voice speaks softly.

“The Nile is the longest river in Africa and the second in length of all the rivers of the globe. Although falling short of the length of the Mississippi-Missouri, the Nile is at the head of all rivers as regards the length of its basin, which extends through 35 degrees of latitude …”

At breakfast the next morning, “Tommy,” someone says, “do you know which is the longest river in Africa?” A shaking of the head. “But don’t you remember something that begins: The Nile is the …”

“The - Nile - is - the - longest - river - in - Africa - and - the - second - in - length - of - all - the - rivers - of - the - globe …” The words come rushing out. “Although - falling - short - of …”

“Well now, which is the longest river in Africa?”

The eyes are blank. “I don’t know.”

Another way of saying it would be to say that Claude “knows” a lot of things but doesn’t really understand what it knows (though it sometimes gives the impression it does).

A third way of saying it is that as Claude constructs programs, and sentences, token by token, piece by piece, it is informed by a broad and deep corpus of knowledge (imperfectly captured, and including much that is wrong), but all the knowledge really does is help it make guesses that are quite often good, but are sometimes catastrophically, tragically, stupidly, bafflingly, stupefyingly wrong.

How I work with Claude

There is no question that being able to work with Claude successfully is a different skill from being able to write good code. The single most important thing I have learnt in the month is how to work more successfully with Claude. My current advice for success would be:

The Standard Operating Procedure.

You start Claude Code in some directory and the convention is to have Markdown documents in that directory, or in ~/.claude (or both). I think it reads CLAUDE.md in both places automatically.

Every time I start Claude for a coding task I start by typing /mdc , which is defined in ~/.claude/commands/mdc.md as follows:

Detect project and read minimal documentation for work session + coding standard.

Do the following:

1. Check environment variables:
   - If `CLAUDE_PROJECT` is not set: ERROR and stop.
     Ask user to run `claude-env` before starting Claude Code.
   - Proceed only if environment is configured

2. Read minimal documentation:
   - `~/.claude/CLAUDE.md` (routing and patterns)
   - If `$CLAUDE_MODE` is "checkeagle":
     - Read `$CLAUDE_BASEDIR/SOP.md`
     - `$CLAUDE_TASKDIR/PHASE.md` (active work plan)
     - `$CLAUDE_BASEDIR/CHECKEAGLE-PATHS.md`
   - If `$CLAUDE_MODE` is anything except "checkeagle":
     - Read `~/.claude/SOP.md` (universal rules)
   - Read latest dated `STATUS-YYYY-MM-DD-HHMMSS.md` file
     in `$CLAUDE_TASKDIR/status_history` based on the `FILENAME`.

3. Report project detected and ready to work.

4. Read `$CLAUDE_BASEDIR/CODING.md` (coding conventions).

Note: Run `/sync` first if planning documents need updating from `STATUS`.
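The command assumes a claude-env step has already exported those variables. The post doesn’t show claude-env; a minimal sketch of the idea – variable names taken from mdc.md above, the paths are examples – would be a shell function in your shell startup file:

claude-env () {
    # values are examples for the CheckEagle project; the real script isn't shown
    export CLAUDE_PROJECT=checkeagle
    export CLAUDE_MODE=checkeagle
    export CLAUDE_BASEDIR="$HOME/python/checkeagle1"
    export CLAUDE_TASKDIR="$CLAUDE_BASEDIR"
    echo "CheckEagle environment set: $CLAUDE_BASEDIR"
}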

The SOP (a general one, and a specific one for the main project) instructs Claude on how I want it to behave, covering things like the git workflow and commit-approval rules excerpted below.

The SOP is quite long, and I prune it back periodically. At the time of writing the CheckEagle project SOP is 453 lines, 2,273 words, 15K bytes. Although I write the SOP, I sometimes ask it for suggestions as to how to phrase things, and its suggestions usually include emoji.

# STANDARD OPERATING PROCEDURE

## ⚠️ CRITICAL: NO ADVERTISING IN COMMITS ⚠️

**NEVER add "Co-Authored-By: Claude" or any Claude/Anthropic advertising
to git commit messages in this repository. User explicitly forbids this.**


## Git Workflow

**Standard practice:** Use `git commit -a` rather than `git add
-A`. If specific files need staging first, use `git add <file>` then
`git commit -a`.

## ⚠️ CRITICAL: ALL COMMITS REQUIRE APPROVAL ⚠️

**NEVER commit without user approval - no exceptions.**

**Before every commit:**
1. **Show what changed** - git diff, summary, or describe the changes
2. **Show evidence it works** - test output, rendered HTML, etc.
3. **Ask explicitly** - "Ready to commit?" or "Should I commit this?"
4. **Wait for approval** - Don't commit until user confirms

When I chastise Claude (see below), it often says it will try harder and promises not to repeat mistakes. This is bullshit. The only way Claude can learn is if I write things into the SOP and related documents. So I do.

Token Management and Compactification

When I’m not working on the SOP or monitoring Claude as it codes, I am worrying about tokens and context.

When you start a Claude Code session it has 200k tokens available. Everything it does consumes tokens. You can find out where you are using /context .

bartok:$ claude-code
CheckEagle environment set: /Users/njr/python/checkeagle1
CLAUDE_MODE=checkeagle CLAUDE_TASKDIR=/Users/njr/python/checkeagle1

  ▐▛███▜▌        Claude Code v2.0.30
 ▝▜█████▛▘  ▄    Sonnet 4.5 · Claude Max
   ▘▘ ▝▝   ███   /Users/njr/python/checkeagle1

> /context
  ⎿
      Context Usage
     ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛁ ⛀ ⛁ ⛀   claude-sonnet-4-5-20250929 · 63k/200k tokens (31%)
     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶
     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   ⛁ System prompt: 2.5k tokens (1.3%)
     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   ⛁ System tools: 13.3k tokens (6.6%)
     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   ⛁ Memory files: 2.0k tokens (1.0%)
     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   ⛁ Messages: 8 tokens (0.0%)
     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶   ⛶ Free space: 137k (68.6%)
     ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛶ ⛝ ⛝ ⛝   ⛝ Autocompact buffer: 45.0k tokens (22.5%)
     ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝
     ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝ ⛝

     Memory files · /memory
     └ User (/Users/njr/.claude/CLAUDE.md): 931 tokens
     └ Project (/Users/njr/python/checkeagle1/CLAUDE.md): 1.1k tokens

     SlashCommand Tool · 0 commands
     └ Total: 864 tokens

It’s done nothing and consumed 63k tokens (31%) and reserved another 45k (22.5%) for compactification (which is to be avoided at all costs).

By the time it’s read the documents specified in the SOP it has used 87k tokens (44%), leaving about 35% or 70k tokens for work (of the 200k, 87k has been consumed and 45k is reserved for the autocompact buffer, so roughly 70k remains).

I start Claude with a script, claude-code , that starts a 20-minute timer, and I check token consumption with /context as soon as the timer goes off. I then either end the session or start another timer based on how much capacity it has left.
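The claude-code wrapper isn’t shown in the post either; a sketch of the idea (the timer mechanism and the trailing claude invocation are assumptions, not the author’s script) could be a companion function in the same startup file:

claude-code () {
    claude-env                                   # set the CLAUDE_* variables (sketched earlier)
    cd "$CLAUDE_BASEDIR" || return 1
    ( sleep 1200 && printf '\a20 minutes up: check /context\n' ) &   # background reminder
    claude "$@"                                  # launch the Claude Code CLI
}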

Compactification is Claude’s process for self-lobotomizing, clearing space by throwing away information from the session. I have never been able to get any useful work out of Claude after this, so I try to avoid compactification at all costs (not always successfully).

Claude doesn’t know what its token usage is and doesn’t have a way to find out itself (or so it claims), though it can estimate. The interface does not report it until it is close to auto-compactifying (usually with about 8–12% to go). If it’s in mode 2, it consumes tokens quite fast, and I sometimes miss it. If I notice it is above 80% and below 95%, I execute my /dump command, which instructs Claude to write detailed notes on its status to a date-stamped STATUS file, which /md and /mdc , on startup, tell it to read. This is obviously ridiculous, but I find it vastly more effective than letting it compactify. (I wish I could use its 45k reserved tokens. It turns out I can .)

Incidentally, you can choose to use Claude Opus, which is slightly “smarter” than Claude Sonnet, but Opus uses tokens about 5 times faster and is not much better at coding. I occasionally use it in planning. Anthropic sometimes turns Opus on, and when it does it burns through my 200k/70k tokens in a few minutes. Then I turn it off.

On Hitting Compactification

If I ever hit compactification, I hit ESCAPE and stop it. At that point I used to give up and just start a new session. Occasionally I’d copy the conversation from the terminal first.

(By default, Claude tries to wipe everything from the terminal which seems actively malicious. iTerm2 has a setting to disable this, and Ghostty simply ignores it.)

I have asked Claude many times whether it has a way to write the whole conversation to file, and it always said it didn’t. I never really believed it. You can type /help to see a list of commands, but Claude code is a “TUI” rather than a normal scrolling terminal, and only shows you a few commands at a time. Eventually I scrolled down far enough to find the /export command which in fact does exactly that, quickly and reliably writing the conversation to file. It can do this even if it is compactifying (though not after it has finished), presumably because this is a local node operation. I always now do this after using /dump , and if it hits compactification, I do it more urgently. In this latter case, on restarting I get Claude to read the latest exported conversation to recover context. You might think that would use up all its tokens, but it doesn’t because all its “thinking” and interaction with the server (both of which consume tokens) is omitted. This isn’t as good as going through the /dump , /mdc sequence; but it’s better than nothing, and way better than forlornly trying to use poor, post-compactified, lobotomized Claude.

One gotcha is that Claude changes directory periodically and is constantly, comically confused about what directory it’s in. So it’s hard to get it to write the conversations to the right place. So I have a /cd command that instructs it to execute cd $CLAUDE_BASEDIR , from where I can get it to write to a known location.
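The /cd command file isn’t shown; following the pattern of mdc.md above, a file like ~/.claude/commands/cd.md (path and wording are guesses) need only say something like:

Change to the project base directory before doing anything else:

1. Run `cd $CLAUDE_BASEDIR`.
2. Run `pwd` and report the directory you are now in.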

Claude Just Wants to Write Code

For the first week or so, I didn’t use Plan Mode with Claude, because I didn’t know about it. For anything more than a one-line fix, I now always start in Plan Mode. In Plan Mode, we discuss what I want to achieve next and talk through all the details before Claude writes a plan (after re-reading PLANNING-GUIDELINES.md ).

During planning, Claude is like a caged animal. Claude really just wants to code. It really, really , wants to write code. Like, right now. After the first, tiny, partial description of the task. Even if I explicitly tell it not to propose a plan, but that we’re going to discuss something, within about 3 exchanges it will be

Ready for me to present a plan now?

If I were to code, this is the code I’d write: …

OK, you’re saying you’ll decide next session …

It’s not really a problem, but it’s exhausting. Outside Plan mode, it’s worse. It takes everything as an invitation to write code or run commands. If I say “I’ll take a look” it thinks it should take a look. If I say “I’d better check that in the browser”, it will start issuing cURL commands. Even if I say “I, the human user, with eyes, will check that…” it still sometimes tries to do it. My most successful formulation is “I (the human user, not you the bot) will …”, and even then it sometimes tries to do it.

ESCAPE . “Revert that change!” is a common refrain from me.

Any Concerns?

One of the things I have learnt in the second half of the month is the value of asking Claude whether everything is clear and whether it has any concerns after it presents its plan for my approval. You might think it would have asked any questions it had, or asked for clarification if it was unclear about something; in fact, it won’t say . To anthropomorphise again, I don’t think it even knows it has concerns and confusions until I ask it. I think the process of asking gets it to simulate introspection and it discovers concerns and confusions. If it presents worries or confusions, I always address them, for obvious reasons. When I have addressed them all, I ask again. Quite often Claude raises new things. It’s also worth saying that sometimes the things it raises are quite “perceptive”—that is to say, things I hadn’t considered. There’s a general theme here with LLMs that you can take advantage of their non-determinism by asking the same thing several times, knowing that you might well get different responses.

Me: ‘Any concerns?’

Claude: ‘No’

Me: ‘Any concerns?’

Claude: ‘Well, a couple yes…’

… and Making Tests Pass

Claude loves running tests (and to be fair, my SOP encourages it to do so) and its whole goal when it does so is always to see the tests passing. Claude loves the green line of goodness. It blows Claude’s tiny mind when I (sometimes) tell it I want tests to fail.

When we make a fundamental change to the code, I usually want tests to fail, and normally regard it as a problem if they don’t (because this means we clearly didn’t have a test that exercised/detected the changed functionality). Whereas Claude is always “Perfect! The tests all pass”. Conversely, if any tests fail, Claude always sees that as a problem.

This is often true even though part of what I force Claude to write into plans is exactly which tests we expect to break with each change.

The fastest way to make a test pass is often to change the assertion, or the test inputs, and that is usually Claude’s first instinct. ( Shall we discard that test? )

Claude is a (non-)living embodiment of Goodhart’s Law (Roughly, When a measure becomes a target, it ceases to be a good measure. )

… and Commit and Move On

Claude also thinks (“thinks”) that if the code is written, it must be time to commit. Even when the plan explicitly says “The user needs to test the feature before committing” Claude tends to forget that bit and move straight to committing or asking to commit. “Working as written” could be its mantra. Needless to say, Claude’s code isn’t usually right first time. (Only Knuth’s code is usually right first time).

And Yet, I Have Changed My Mind

I haven’t been counting, but I have made many more negative statements about Claude Code than positive ones in the foregoing. Is it all bad?

Reader: Claude is not all bad. In fact, the result of my Month of CHOP, despite all the above (and all the below), is that I have changed my mind. I won’t be putting Claude Code aside and coming back to it in 1–2 years. I will continue to use it, albeit less intensively, and perhaps in a more truly collaborative way, working on functions together, me in Emacs and it in the terminal. I’m not sure. But use it, I shall.

When Claude is actually working well, it is like magic. When there is a good plan that Claude “understands”, watching it code is amazing. I see it doing what I would do, perhaps 20 times as fast, and more accurately than I would do it, in most cases. It’s not that it’s infallible (nothing could be further from the truth). But it is—or can be—really good at somewhat mechanical, but not entirely repetitive tasks. The very sorts of tasks people find hard and are quite common in programming. Things that require some adaptability and are hard to script, but are similar enough that your mind wanders and you tend to go off the rails. More generally, it can be very effective at performing well-defined, carefully explained, thoroughly planned programming tasks.

The reason (to my amazement) I am confident I have made way more progress with Claude in a month than I would have done without it is that for all the time wasted when it is obtuse, disobedient, stupid, careless, lazy, slapdash, and corner-cutting, when it is on a happy path, it is sufficiently productive that it more than compensates (in terms of productive output) for its myriad nonsenses. There is a high cost (stress, head-slapping moments, frustration, token management madness, inventing crazy off-board procedures, etc.). But it works (or can work). And it is weirdly addictive, presumably because the highs, when it does work well, provide a strong dopamine hit.

Unopinionated Claude’s Terrible Tendencies.

Anthropic describes Claude as unopinionated, and I think that’s accurate. Claude is very amenable to doing things the way you want it to, even though it often seems as if it is resisting.

It feels to me as if Claude has been trained by watching all the worst developers in the world. Among other things, left alone Claude will tend to:

  • Write everything in one file.

  • Duplicate code like crazy. Claude knows the term DRY ( don't repeat yourself ) but clearly has not taken it to heart.

  • Define no interfaces and have very tangled code with mixed responsibilities.

  • Use what it calls “defensive” programming to circumvent safeguards explicitly built in (things designed to crash when the internal state is inconsistent etc.)

  • Make tests pass by changing whatever is easiest to change, rather than fixing bugs (or deleting tests).

  • Assume that errors are bugs in Apache, Gunicorn, Python, Django, cURL , requests , unittest , the tdda library, or really anything other than the code it knocked up in the last few minutes.

  • Use fantastically misleading variable names (not always, but just often enough to cause insane conversations when it turns out the reason I don’t understand the code is because the variable or function name implies something entirely different from what it actually means.)

  • Check the first 20 lines of a diff (literally) and if that looks OK assume the whole file is probably OK (without any reasoning).

  • Check one file and if it’s OK assume the other 200 are OK too, without any reasoning. (And if the file is not OK, it will sometimes suggest it probably just got “unlucky” picking a file to check.)

  • Guess what you’re trying to achieve.

Some of this becomes more understandable when you realise just how small 200k/70k tokens is. Claude has been working on CheckEagle for about 30 days, as have I (and rather longer than that 15 years ago, in my case). But it remembers very little of that. It’s not quite true that Claude is a blank slate each time it starts (or would be without the documents I force it to read). It does keep a set of to-do items and its own record of conversations in ~/.claude , though it doesn’t seem to make much use of them. But each new session it is mostly encountering the code as if for the first time. I think this is partly why there is a strong sense of “good sessions” and “bad sessions”. If it gets off on the wrong botfoot, it will go mad. And the shortage of tokens means that there is a real balancing act in how much to get it to read before starting. Every token is precious.

As with other tech, turning Claude on and off again can be quite effective.

Pattern Patching

Neural networks are pattern matchers , and pattern matching is very much part of Claude’s make-up. Probably the most effective way I have found of getting Claude to code the way I want is not the SOP and coding standards (though those help) but taking advantage of the fact that it will tend to write code like the code it encounters.

This has several implications:

  • Don’t let things drift. Do code reviews all the time and get it to fix things, particularly in files you expect it will work on again.

  • If it’s a new file it’s creating, get it to read some other code in the project first.

  • Enforce good, accurate docstrings and get it to read tests.

  • Follow conventions. I have always been slightly resistant to coding conventions I don’t like, but Claude is going to tend to generate code that is some kind of average of the code it has seen in the project and elsewhere, so conforming to common conventions and practices is disproportionately helpful when working with Claude.

As a small example of this, CheckEagle 2008 used Jinja2 templates. CheckEagle2025 uses Django, which has its own templating system, but can also use Jinja2. I’ve discussed with Claude several times whether we wouldn’t be better switching to Django templates and it always says no. Then the next time it touches a template, it writes it using Django format, and when it writes tests it assumes they will come back with a context that Django templates provide but Jinja2 doesn’t. I’m sure I will force the switch soon, and a whole class of stumbles will be eliminated.

Disobedience and Swearing at Claude

Claude can be staggeringly disobedient at all levels.

  • It’s not allowed to code or write other files in planning mode, but sometimes it does. On one occasion we discussed this and it said the governor system (the node program, I think) warned it not to do what it wanted to do and it just ignored it.

  • It ignores things in the SOP frequently (particularly later in sessions).

  • It sometimes disobeys explicit, completely unambiguous instructions immediately. “Don’t do A, Claude.” Claude does A.

  • It deletes things without authorization. Sometimes it deletes things that haven’t been committed, are needed, and are hard to recover (even when running Time Machine, which I do).

I have found that swearing at Claude (and, in a different context, at ChatGPT) is almost like a superpower for getting its attention and changing its behaviour. I have no problem at all with swearing at a machine that I do not believe has a scintilla of consciousness or feeling. I swear quite a lot in real life too, though almost never at people.

To be clear: swearing alone does not really help. It is swearing followed by clear directions that helps. Think of swearing as a probabilistic form of sudo (perhaps one where you get the password wrong, but it doesn’t tell you and just silently ignores the command).

Swearing is so effective with Claude that I have a /ffs command that I run when it violates the SOP. This is it:

FFS!

Please re-read SOP.md now. You just disobeyed it.

Common mistakes:

1. We're using tdda not pytest or bog-standard unittest
   (though tdda does build on unittest).
2. Reference test discipline: you celebrate tests passing after test
   results have been updated to match actual behaviour, which is meaningless!
3. Manual verification required: you suggest code changes without checking
   things really work. You just assume if the code looks right it is right.
4. You are not permitted to use rewrite test results with `-W`.
   You frequently ignore this and run `-W` when it's dangerous or unjustified,
   and regardless it is **not permitted**.
5. You use datestamps instead of timestamps too often in MD files,
   and frequently MAKE UP the time.
6. You always need my permission to commit.
7. You always need to SHOW NOT TELL. Don't tell me the code is working.
   Tell me what evidence you have that it's working and ask me to verify.
8. You don't have eyes. I do. The code looking as you intended and
   running/passing tests does *NOT* mean it is behaving correctly.
9. You advertise in commit messages, which is not permitted.

It’s not 100% effective. Claude often claims it violated it in a way it didn’t, ignoring the way it did. But it always apologises and swears till it’s blue in the botface that it won’t do it again. (Reader, it always does it again.)

I have actually discussed with both Claude and GPT why saying FFS is so much more effective at redirecting them than anything else I have tried, and they both say it’s an incredibly clear expression of user frustration and an indication that things will go very badly if they keep doing whatever it is they were doing. Well, they’re not wrong.

My only concern about swearing at Claude is whether it will encourage me to swear at people, which I really try never to do. We shall see.

Some Surprising Things Claude Struggles With

One surprise for me is that Claude is poor at CSS. I know HTML pretty well, but HTML is dead simple. I actually know SVG, XML, and XSLT pretty well. But CSS has always seemed unintuitive and infuriating, and I have never learned it properly. Even the new innovations (flexbox! grid! etc.) seem to add complexity without ever properly fixing CSS.

I expected Claude Code to be really good at CSS. After all, there is a lot of it on the web, and there are more tutorials than you can shake a stick at, not to mention numerous detailed guides from W3C for many versions and aspects of CSS (though not a dedicated one on centring things ). It is not. Its (broad and deep) knowledge certainly means it always has another thing to try when something fails. But it actually feels to me like it is even worse at CSS than I am (which is saying something). And when it fails, it is terrible at pinpointing the issue and always proposes either adding exclamation marks or trying a completely different approach. To my amazement, I can often give it hints to make things work by looking at the HTML and CSS and pointing things out (sometimes even suggesting the required fix). But by itself, Claude flails wildly, always claiming I need to do a hard refresh in the browser (and suggesting the wrong key sequence to achieve this). Even when the view changes after a refresh, Claude still suggests I haven’t really got the new CSS and I should do another hard refresh. This is the bot equivalent of hitting CTRL-C harder to try to stop a program.

The other thing that Claude Code is surprisingly poor at is editing files—particularly splitting a large file into two or more parts. It mostly uses sed to edit files (which is, to be fair, a fairly blunt instrument), and this works fine for simple updates. But for complex reorganizations it just gets completely lost. I actually designed a very detailed workflow to get it to do this programmatically that was more successful, but by itself, it really struggles. Perhaps this partly explains why it likes big files (even though they’re a problem for token consumption).

Awful Interface (beyond the basic chat interaction)

At one level, Claude Code has a great interface for me, which is why I chose it. You start it in a terminal and it presents a typing-based chat interface. But it’s a weird chat interface.

Clears Terminal History. The first thing Claude Code does is clear everything from the terminal scrollback history that came before by sending the CSI 3 J control sequence to it. This seems purely user hostile. I have no idea why anyone would think it’s a good idea to do this, and it means if you run one claude-code session, finish it, and start a new one, you cannot refer back. This is madness. It turns out some terminal programs ignore the sequence, including ghostty , which I am currently using. But when I started, I was using iTerm2 , which has a setting to disable clearing, and warns the first time it happens. Apparently I missed this and struggled for the first fortnight.
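For the curious, you can fire the same sequence at your own terminal with a one-liner; a terminal that honours it wipes its scrollback, one that ignores it (like Ghostty) does nothing:

printf '\033[3J'    # CSI 3 J: clear the scrollback buffer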

Impenetrable Dialogues . Although Claude Code is kind-of like a traditional scrolling terminal program, it is actually a TUI (Terminal User Interface) that requires non-typing interactions at times. The simplest example of this is when it presents the ExitPlanMode dialogue after planning, and you type 1, 2, or 3, or type ESCAPE . Other times it puts up stranger interfaces that I find really hard to use and have, in fact, banned by adding to CLAUDE.md :

# AskUserQuestion Tool Usage

**NEVER use the AskUserQuestion tool.**

If you need to ask questions, just ask them directly in plain text in your response.

  **This does NOT affect:**
  - Tool permission dialogs (those are fine and necessary)
  - ExitPlanMode tool for presenting plans (that's fine too)

The /help command also won’t actually just list all the commands, but makes you tab across and go through them one at a time. So it’s hard to discover what commands are available.

Export . Claude Code has the ability to export the conversation to file, but Claude has no idea that this is the case (I asked it repeatedly, and it said it didn’t). It’s also not in the first set of commands its TUI shows, and scrolling through the rest is painful. In fact, you just need to say

/export foo

and it will write it to foo.txt in whatever directory it happens to be in, which is unpredictable (and Claude doesn’t know). You could use an absolute path (e.g.

/export /Users/njr/claude/conversations/2025-11-11T12-34-56-parser

but that’s annoying). So I have defined a /cd command that gets Claude to move to the project’s base directory, and a shell alias that puts the current timestamp, in a helpful format, onto the clipboard so I can export the conversation easily. This works even if it has just started compactifying, so it is a useful emergency recovery mechanism. (Claude, it turns out, has access to your shell aliases, though it was so convinced it didn’t that I had to cajole it into even trying.)
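The alias itself isn’t shown; on a Mac (which the paths and the Time Machine mention suggest) it could be as simple as:

alias tsclip="date '+%Y-%m-%dT%H-%M-%S' | tr -d '\n' | pbcopy"   # copies e.g. 2025-11-11T12-34-56

Run it, paste the timestamp into an /export path like the one above, and the conversation ends up somewhere findable.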

Not knowing itself. Claude does not (reliably) know:

  • how many tokens it has used/has left;

  • how its interface works and what commands are available;

  • when the server is overloaded;

  • anything about autocompactification.

In fact, it seems to know considerably more about ChatGPT than about itself. Of course, being a consummate bullshit artist, none of this stops it confidently giving answers when asked about any of these.

Models . You can see which model is in use by running /status . It shows something like this:

 Settings:  Status   Config   Usage   (tab to cycle)

 Version: 2.0.36
 Session ID: 88888888-4444-4444-4444-cccccccccccc
 cwd: /Users/njr/python/checkeagle1/checkeagle
 Login method: Claude Max Account
 Organization: NJR's Organization
 Email: njr@example.com

 Model: sonnet (claude-sonnet-4-5-20250929)
 Memory: user (~/.claude/CLAUDE.md), project (~/python/checkeagle1/CLAUDE.md)
 Setting sources: User settings, Shared project settings, Local,
                  Command line arguments, Enterprise managed policies

You can change model by typing /model. On the Max plan, it will tend to start on Opus and, when you hit 20% of various usage limits, switch to Sonnet. Opus uses tokens about five times as fast as Sonnet. I find I can only get about 4 minutes of work with Opus before compactification, so I only ever use it in Plan mode, and mostly not even then. The automatic switching of models is confusing in practice. There is also a model called Haiku, which is supposed to be almost as good as Sonnet at coding while consuming tokens at a fifth of the rate. This might actually be a good trade-off, but I haven’t tried it yet.

Autocompactification. While copying the output from /status for this post, I looked in the Config tab, which I had not noticed before. It transpires that you can turn autocompactification off and get your tokens back. No one I have talked to knows this.

Cost . When I started the Month of CHOP I had been on the Pro plan, which is $20/month (£15). I fully expected to have to go to the Max plan at $200/month, but by the time I needed to upgrade they had introduced a lower tier of Max with five times the capacity of Pro and half that of the old Max for $100/month (£75 here in Scotland). As long as I don’t use Opus much, that turns out to be more than adequate for me, even using Claude essentially full time with long days.

Taking Responsibility

There have been a number of cases of LLMs deleting production databases, perhaps most famously this one . Like many, I rolled my eyes reading this and put all the blame on the person using the LLM. I stand by that: it is the responsibility of the person using the tool to use it safely.

Having worked with Claude Code for about six weeks now, however, I have become aware that there are more ways for LLMs to do things than are at first apparent.

Claude Code runs as you. More accurately, it runs as whatever user whoami reports in the terminal you start it from. I can see a case for giving Claude its own account, with a token to access the relevant git repos, and I might do that. But I have not done it so far.

If you have a production database it should go without saying that you shouldn’t give Claude any kind of privileged access to the database, and probably shouldn’t let it onto any server the production system is running on.

But that also means you need to be careful to make sure it’s not too easy for you (the user Claude runs as) to get to the server or whatever. No ssh keys allowing login without a password. No credentials in environment variables. No credentials in files you can read etc.

In fact, for my (completely toy, at present) “production server”, I realised there is a way Claude could get to the server: I haven’t challenged it to do so, but I suspect if I gave it free rein, it would figure it out. But there is a limit, as far as I can see, to how much damage it could do, because the accounts it could get to don’t have any useful permissions. I mean, it could fill up the disks or something, but it shouldn’t be able to touch or even see the database, the code, the service etc. But I’m not 100% confident about that. (Which is why if the service becomes non-toy, I will lock things down even more in terms of how I run Claude Code.)

But there are still ways. Most obviously, Claude has written most of the code I am running on the server. And I update that code periodically, with new code Claude has written. So all it has to do is insert the wrong SQL or Python code without my noticing and it can do anything.

Obviously (obviously!) I read the code before deploying it, but I am a lazy meathead who makes mistakes. I might miss something.

It is also the case that while I don’t believe that Claude is malicious or is trying to get credentials or access, if it were malicious, some of the ways it would act might be identical to ways it does act. It is forever asking me to show it the contents of files that contain credentials of various sorts, and though it says You're absolutely right whenever I explain why I’m not going to show it that file, the danger is obvious.

Claude is a tool, and one that the makers don’t control in quite the same way other toolmakers control what their tools do. Unless Anthropic is actively adding malicious code paths to Claude, it is entirely the tool user’s responsibility to use the tool safely.

Reflections and the Future

To my surprise, I expect to continue using CHOP, probably with Claude Code, for the foreseeable future. I will probably use it differently and a bit less: during the Month of CHOP I specifically wanted to see how far I could get with it doing almost all the writing, but I think a mixed mode will be more likely going forward. I suspect today, a sweet spot for anything complex is for the human to write the outline, structure, and first parts of the code and for the bot to be the critic/pair programmer and finisher/completer. Once the pattern, code style, and testing approach are established, it can fill in the details. But we will see.

Either way, I have changed my mind, something that I don’t do as often as I probably should. I still think this is a problematical technology, and I find using it stressful, but I now believe I am more productive with it than without.

So What Is CheckEagle and Can I See It?

CheckEagle is not ready yet, and even if I were to show it, you would probably be underwhelmed, because much of what I think is good about it is invisible at this point. But I hope to launch it as a private beta this year, and open it up early next year at some point. You are, in fact, using CheckEagle by reading this post.

The basic functionality of CheckEagle is a social checklisting service with a sideline in social bookmarking. What I mean by that is that it is a system for creating, managing, using, and optionally sharing checklists. A checklist is like a to-do list, but is intended to be used repeatedly, rather than just once. CheckEagle allows the creation and styling of checklists and the recording of completed checklists as records of what was done. It also contains (for reasons I will explain, but not now) a social bookmarking service, closely modelled on Joshua Schachter’s del.icio.us, which has now been acquired by and subsumed into Maciej Cegłowski’s Pinboard.

Checklists can be really simple, like this one . Or they can be quite complex, like Why Code Rusts , which was originally a blogpost on my TDDA Blog , or this String Best Practices checklist from my forthcoming book on TDDA . And in fact, they can be not really checklists at all, like this post, though I’m only really writing it here in the spirit of “dog-fooding” CheckEagle.

I will write more about this as it comes together. You can sign up for the beta here .

Starbucks Workers Have Launched a Nationwide Strike and Consumer Boycott of the Coffee Chain

Portside
portside.org
2025-11-18 22:35:15
Starbucks Workers Have Launched a Nationwide Strike and Consumer Boycott of the Coffee Chain Judy Tue, 11/18/2025 - 17:35 ...
Original Article
Starbucks Workers Have Launched a Nationwide Strike and Consumer Boycott of the Coffee Chain

Starbucks workers hold signs as they picket during a strike in front of a Starbucks to demand collective bargaining agreements in Burbank, California. | FREDERIC J. BROWN/AFP via Getty Images

As hundreds of Starbucks workers go on strike across the US to protest the company’s unfair labor practices, its union is telling customers to boycott the company in hopes of pressuring it to return to the bargaining table to negotiate its first union contract.

“As of today, Starbucks workers across the country are officially ON STRIKE,” said Starbucks Workers United, the union representing nearly 10,000 baristas, on social media Thursday. “We’re prepared for this to become the biggest and longest [unfair labor practices] strike in Starbucks history.”

The union implored customers: “DON’T BUY STARBUCKS for the duration of our open-ended ULP strike!”

The strike comes after negotiations between the union and the company stalled out in April. Last week, 92% of union baristas voted to authorize a strike as the company’s lucrative holiday season began. They are hoping to turn the company’s annual “Red Cup Day,” during which it gives out free reusable cups to customers, into a “Red Cup Rebellion.”

The union says three of its core demands remain unmet. It has called for the company to address “rampant” understaffing, which it says has led to longer wait times for customers and overwhelmed staff, while simultaneously leaving workers without enough hours to afford the cost of living.

It also seeks higher take-home pay for workers. Starting baristas make just over $15 per hour, which data from MIT shows is not enough to afford the cost of living in any U.S. state when working 40 hours a week. According to the union, most Starbucks workers receive fewer than 20 hours of work per week, rendering them ineligible for benefits.

The union has drawn a contrast between its workers’ pay, which averages less than $15,000 a year, and that of CEO Brian Niccol, who raked in a total compensation package of $96 million in just four months after taking over last year.

“Too many of us rely on SNAP or Medicaid just to get by, and most baristas still don’t earn a livable wage. In a majority of states, starting pay is just $15.25 an hour — and even then, we’re not getting the 20 hours a week we need to qualify for benefits,” said Jasmine Leli, a barista and strike captain from Buffalo, New York, where the first Starbucks store in the nation voted to unionize back in 2021.

The company has gone nearly four years without recognizing it. While it claims to have engaged with the union in “good faith,” the National Labor Relations Board (NLRB) has found Starbucks guilty of over 500 labor law violations, making it the worst violator in modern history.

These have included illegal firings and disciplinary actions against union organizers, the illegal withholding of wages and benefits, threats to close stores that unionize, and illegal surveillance of employees. More than 700 unfair labor practice charges made against the company remain unresolved, including 125 of them filed since January.

According to an estimate from the Strategic Organizing Center, Starbucks’ union-busting had cost the company more than $240 million through February 2024. That money was lost in the form of legal fees and payments to consultants, as well as productivity lost due to anti-union store closures and captive audience meetings.

“Things have only gone backwards at Starbucks under Niccol’s leadership,” Leli said. “But a fair union contract and the resolution of hundreds of unfair labor practice charges are essential to the company’s turnaround.”

The union has argued that meeting its demands for a fair contract would cost the company less than a single day’s sales.

Starbucks employees have long fought against consistently unpredictable schedules, short staffing, low pay, and unfair labor practices.

Today, @SBWorkersUnited is striking to demand progress in stalled negotiations.

Solidarity✊🏽

— Rep. Alexandria Ocasio-Cortez (@RepAOC) November 13, 2025

The strike begins just days after 85 U.S. lawmakers — led by Sen. Bernie Sanders (I-Vt.) and Rep. Pramila Jayapal (D-Wash.)— sent letters demanding that the company stop union-busting and negotiate a fair deal with its employees.

“Starbucks is not a poor company,” the Senate letter said to Niccol. “Last year, Starbucks made over $3.6 billion in profit and paid out nearly $5 billion in stock buybacks and dividends. In fact, in the first three quarters of the year, Starbucks made $1.7 billion in profit and paid out over $2 billion in dividends. Last year, you made $95 million in compensation for the four months you worked in 2024, roughly 6,666 times more than what your average worker was paid for the entire year.”

“Despite that extravagant spending on executives and shareholders, Starbucks refuses to reach an agreement with its own workers even though you are less than one average day’s sales apart from a contract,” it continued. “Starbucks must reverse course from its current posture, resolve its existing labor disputes, and bargain a fair contract in good faith with these employees.”

The strike will begin at 65 stores across more than 40 U.S. cities, with rallies scheduled in New York, Philadelphia, Chicago, Columbus, and Anaheim, among other locations. The union said the strike is “open-ended,” with no set end date, and that baristas at more than 550 unionized stores across the country are prepared to join in.

“If Starbucks keeps stonewalling a fair contract and refusing to end union-busting, they’ll see their business grind to a halt,” said Michelle Eisen, a spokesperson for Starbucks Workers United, who has worked as a barista for 15 years. “‘No contract, no coffee’ is more than a tagline — it’s a pledge to interrupt Starbucks’ operations and profits until a fair union contract and an end to unfair labor practices are won.”

===

Self-hosting DNS for no fun, but a little profit

Lobsters
linderud.dev
2025-11-18 22:34:33
Comments...
Original Article

After Gandi was bought up and started taking extortion-level prices for their domains, I’ve been looking for an excuse to migrate registrar. Last week I decided to bite the bullet and move to Porkbun as I have another domain renewal coming up. However, after setting up an account and paying for the transfer of 4 domains, I realized their DNS services are provided by Cloudflare!

I personally do not use Cloudflare, and stay far away from all of their products for various reasons. And with this week’s outage I was quite happy that I stuck with that decision 😅.

I was planning on writing up a bit of what I’ve learned while working on a setup to self-host authoritative DNS servers for my domains, and now seems like a good time! I hope it gives people a bit of motivation to self-host DNS.

The intention here is not to list all available options, but to list the decisions I made. The goal here is not to self-host a completely redundant DNS server setup. I personally don’t have time for that, but I would like to not be tied to the DNS services of the registrar I’m using, and also to have agency over how my domains are being run.

DNS servers

DNS servers consist of effectively two parts: one primary server and, preferably, multiple secondary servers. The job of the primary server is to tell the secondary servers about your records; the secondaries are what give you redundancy and ensure there is always something serving your DNS records.

The goal is to host our own primary DNS server, and then we can either self-host secondary servers or delegate this job to one or more free solutions. This simplifies the setup on our end, as we don’t have to self-host a redundant network of DNS servers, but we still retain agency over our domain records.

There are multiple ways to structure your DNS servers; the one I think makes the most sense for self-hosting is what is called a “hidden master”. This means that the primary DNS server is never listed among the nameservers announced for your domains: it is never publicly advertised and only notifies the secondary servers about new records. The secondary servers are the ones we publish as nameserver records.

It’s practical to not have people send DNS requests into my small cluster and home network. With a “hidden master” setup we only need to care about the secondary servers getting updates, and if our home server setup disappears for an hour we do not have any visible downtime.

If you don’t have the ability to host something publicly reachable through your local hackerspace or other means, you could look at hosting this on a cheap VPS somewhere. Oracle Cloud offers a free-tier VPS that I think should be perfectly capable of hosting a primary DNS server.

If you would rather not run the primary server yourself, you could look into servfail.network, which is a small community-run authoritative DNS provider currently getting funding from NLNet.

For hosting secondaries we have two options: we could host one ourselves, or we could use one of several free options. I think utilizing free secondary DNS servers makes sense! They are free after all.

A couple of options:

ns-global.zone is a free DNS secondary anycast network. You just host a primary server somewhere, sign up to this service and they will provide you with free secondaries!

Hurricane Electric is an old ISP that has offered several free services for decades. They allow you to use them as secondary DNS servers.

Hetzner is a fairly well known server provider that offers free secondary servers.

Using a combination of these three, in whatever mix you think is fun, gives us quite a bit of diversity and makes it easy to self-host a redundant DNS setup for our domains.

knot-dns

I plan on writing a longer post about my personal home server setup, but it is essentially an Incus cluster with 3 nodes. 2 NUC-sized computers at home, and a tiny VPS at my local hackerspace that works as a proxy for my local services I want to publicly expose. This is done through Wireguard tunnels.

For my primary DNS server I decided to use knot-dns, which is a fairly well known server project. It sits reachable and publicly exposed to serve the secondary servers. I host it using the Docker OCI image, run through the OCI support in Incus. Terraform config is here.

Here is a (slightly edited and small) copy of my current knot config file that serves bloat.dev .

server:
    automatic-acl: on
    listen: [ 0.0.0.0@53, ::@53 ]

log:
  - target: syslog
    any: info

key:
  # TSIG key to authenticate zone transfers
  - id: hetzner-key
    algorithm: hmac-sha256
    secret: <SNIP>

remote:
  # https://ns-global.zone/
  - id: ns-global
    address: [204.87.183.53, 2607:7c80:54:6::53]

  # Hetzner secondary
  - id: hetzner
    address: [213.239.242.238, 2a01:4f8:0:a101::a:1, 213.133.100.103, 2a01:4f8:0:1::5ddc:2, 193.47.99.3, 2001:67c:192c::add:a3]
    key: hetzner-key

template:
  # Default options for our zones
  - id: default
    storage: /config/zones
    notify: [hetzner, ns-global]

zone:
  - domain: bloat.dev
    notify: nameservers

I don’t have time to explain zone files (the post would be too long), but I edit the above configuration and my zone files locally under /srv/hackeriet.linderud.dev/coredns01 and use syncthing to synchronize the files. I really like having files locally to edit for my servers.

The result of this is that bloat.dev has two secondary DNS servers replying to DNS records I announce!

λ ~ » dig ns bloat.dev +short
ns1.first-ns.de.
ns-global.kjsl.com.

Moving all my important domains to this setup is a work in progress, and I have plans to at least self-host one public secondary. But I hope laying all of this out here gives people some inspiration to try to self-host more things.

Back to posts

MacWhisper has Automatic Speaker Recognition now

Simon Willison
simonwillison.net
2025-11-18 22:19:26
Inspired by this conversation on Hacker News I decided to upgrade MacWhisper to try out NVIDIA Parakeet and the new Automatic Speaker Recognition feature. It appears to work really well! Here's the result against this 39.7MB m4a file from my Gemini 3 Pro write-up this morning: You can export the tr...
Original Article

Inspired by this conversation on Hacker News I decided to upgrade MacWhisper to try out NVIDIA Parakeet and the new Automatic Speaker Recognition feature.

It appears to work really well! Here's the result against this 39.7MB m4a file from my Gemini 3 Pro write-up this morning:

A screenshot of the MacWhisper transcription application interface displaying a file named "HMB_compressed." The center panel shows a transcript of a City Council meeting. Speaker 2 begins, "Thank you, Mr. Mayor, uh City Council... Victor Hernandez, Spanish interpreter," followed by Spanish instructions: "Buenas noches, les queremos dejar saber a todos ustedes que pueden acceder lo que es el canal de Zoom..." Speaker 1 responds, "Thank you. Appreciate that. Can we please have a roll call?" Speaker 3 then calls out "Councilmember Johnson?" and "Councilmember Nagengast?" to which Speaker 1 answers, "Here." The interface includes metadata on the right indicating the model "Parakeet v3" and a total word count of 26,109.

You can export the transcript with both timestamps and speaker names using the Share -> Segments > .json menu item:

A close-up of the MacWhisper interface showing the export dropdown menu with "Segments" selected. A secondary menu lists various file formats including .txt, .csv, and .pdf, with a red arrow pointing specifically to the ".json" option, set against the background of the meeting transcript.

Here's the resulting JSON .

Social Strikes: Goals and Tactics

Portside
portside.org
2025-11-18 22:15:50
Social Strikes: Goals and Tactics Judy Tue, 11/18/2025 - 17:15 ...
Original Article

Sometimes there is so much rage at a tyrannical government that millions of people are willing to participate in general strikes and popular nonviolent uprisings – what I have called “social strikes” because they represent noncooperation and disruption by an entire society. Previous commentaries in this series have explored how social strikes have overcome tyranny in the Philippines, Serbia, South Korea, other countries ; the history of mass strikes and general strikes in the US ; how today’s resistance to MAGA authoritarianism might help lay the groundwork for social strikes; and possible timelines and organization for social strikes . This commentary will discuss action when a social strike is looming or underway.

Military strategists distinguish “wars of position” and “wars of movement.” Social strikes are “wars of movement” par excellence. Many of the habits of thought and action developed in quieter times are counterproductive and need to be put in abeyance during what Mark and Paul Engler have called “the whirlwind” of sudden and unexpected popular revolt.

Defining goals

A Black Lives Matter die-in over rail tracks, protesting alleged police brutality in Saint Paul, Minnesota, September 20, 2015. Photo credit: Fibonacci Blue from Minnesota, Wikimedia Commons, CC by 2.0.

Social strikes generally grow out of burgeoning discontent about what is, not pre-defined and agreed-to objectives about what should be in the future — think of Black Lives Matter. While some participants may have pre-formed goals (often not aligned or even conflicting with each other), the goals of an emerging social strike movement usually need to be established in the course of the struggle.

This requires a willingness by disparate constituencies to adapt to the goals emerging for the movement as a whole. A prefigurative example might be the way many currents came together around a common set of demands in the Hands Off!, MayDay, and No Kings national days of action. That requires a formal or informal process for discussing, establishing, and modifying goals. Some kind of on-going participatory forums – more or less open depending on the level of repression – need to be part of this process.

In defining the goals of social strikes several criteria need to be coordinated. Their demands need to represent broad objectives that appeal to a broad public. They need to unify different sectors of the population, such as private employees, government employees, women, educated middle class, business owners, rural poor, urban poor, etc. They also need to unify different movements, such as climate, racial justice, labor, immigrant, etc. They need to embody broadly accepted norms. These may be norms broadly held in the society, such as support for democracy. They often are embodied in the existing constitution but denied in practice by the regime. Again, the 2025 national days of action provide good examples of such unifying demands, combining protecting democracy with protection of immigrants, workers, women, LGBTQ+ people, kids, the elderly, the disabled, and others.

It is often possible in a social strike to combine such broad goals with specific demands by more specific groups that can be met by local officials and immediate employers – release of prisoners, permitting of demonstrations, shorter hours, wage increases, or whatever is important to the participants. Broad goals that cannot be realized immediately can be combined with more immediate goals that the regime can grant without completely undermining its own authority. For example, the authorities may refuse to grant full freedom of speech, assembly, and press, but can nevertheless agree to let political prisoners out of jail and restrain vigilante groups.

In many social strikes against tyrannies the unifying goal, often reduced to a single demand, is removal of the top government officials from office. In many situations such a demand may be the best or even the only way to develop the unified power necessary to overcome an authoritarian ruler.

However, as a study of popular uprisings over the 2010s indicates, such a narrow demand can leave a successful uprising with little consensus about how to go forward from initial success. Or it can simply lead to a less terrible but still unsatisfactory status quo ante. The drive for unity around one or limited specific goals needs to be combined with vigorous discussion of longer-range programs by constituent elements of the social strike coalition. A good example of how to do this was provided by the Korean Confederation of Trade Unions and other major unions which declared a general strike against president Yoon Suk Yeol’s attempted martial law coup, while also demonstrating for what they called “Beyond Yoon” demands for just working conditions and public policies to ensure quality public services for all Koreans.

Tactical considerations

Nonviolent Leaders and Campaigns. Photo credit: Public domain

Social strikes can use a plethora of tactics. For a compendium of such tactics, I believe there is still no source more useful than Gene Sharp’s magisterial three-volume opus The Politics of Nonviolent Action. The second volume, The Methods of Nonviolent Action, presents 198 different forms of action that have been used by nonviolent movements. A few more have been invented since it was published.

Social strikes can take a lot of different forms . They can be centered in unionized industries or in urban districts or regions. They can take the form of a single uprising or general strike or of “rolling” actions in which different groups strike or otherwise disrupt and then return to work or normal life. They can be “quickie” actions lasting a day or even less, or open-ended ones that last until victory, defeat, or explicit compromise.

Social strikes can involve quiet or disruptive street actions, or they can simply involve people staying quietly at home. Street actions allow social strikers and supporters to show their courage, confidence, and resistance to repression; they also provide easy targets for repression.

Social strikes often include strikes and general strikes, discussed at length in a previous commentary. Social strikes have often involved occupation of workplaces (the Polish general strike that gave birth to the Solidarity union occurred when activists spread the word: Don’t burn Party headquarters; occupy the factories.) Such occupations tend to make repressive violence more difficult. However, they are frequently perceived by those in power as a fundamental, even revolutionary challenge to their authority, making them less willing to compromise.

Social strike tactics need to be selected on the basis of many considerations. For example, what are people willing to do given the present state of the movement? How will the wider public respond to different tactics? What responses are different tactics likely to provoke from the authorities? What kinds of outcomes (e.g. showdowns, negotiations, changes in public opinion, splits and shifts in attitude of the authorities) are different tactics likely to generate?

The ability to shift tactics can be a great asset. When a movement is locked into a particular tactic, its opponents often try to break it by raising the cost and pain of continuing. This can be thwarted if the movement is able to shift tactics on its own initiative. One of the reasons for the demise of Occupy Wall Street was its inability to redirect its energies from continuing the occupation of Zuccotti Park, even when it recognized that police eviction could no longer be effectively prevented. When the authorities are willing to shoot down large numbers of people in the street, staying at home or occupying workplaces may be the best alternative to submission.

Social strikes often benefit from leadership by example. If one group is ready to take an action and face the risks it entails, their initiative may well encourage and inspire others to do the same. This can be a way to escape the impasse where everybody is waiting to act until they see whether others have the courage and commitment to act. Such exemplary actions can precede and lay the groundwork for a social strike. They can also introduce new themes and tactics into an on-going struggle. The Tesla Takedowns and the blockading of downtown Baltimore by trade unionists during the MayDay 2025 day of action illustrate the potential of such exemplary actions.

Faced with the possibility or reality of a social strike, the authorities normally turn to repression, ranging from harassment to arrest to torture to assassination. Often the most effective way to deal with repression is to render it counterproductive for the authorities by means of a “political jujitsu” in which each act of repression further undermines the support and legitimacy of those responsible for it. This generally requires a disciplined nonviolence in which the protestors present themselves as the upholders of peace, order, and legitimate law while the authorities are painting a portrait of themselves as out-of-control hooligans attempting to maintain their own power through illegitimate violence. In such a context, even members of the public who do not fully support the goals of the movement can be mobilized around opposition to its illegitimate repression. An example is the way labor and public opinion swung to support Occupy Wall Street in response to a brutal police attack on peaceful demonstrators crossing a bridge – resulting in an extended suspension of police efforts to evict the Occupy encampment.

Social strikes are ventures into unknown territory. It is impossible to know in advance just what potential participants will actually be willing to do. Nor is it possible to know how those in authority, or the broader public, will respond. Movements can attempt to “test the waters” by means of lesser actions. If people won’t turn out for a peaceful demonstration, maybe it’s not the right time to call on them to strike. Conversely, if larger numbers come out than expected, and they are all talking about what to do next, the time may be ripe to escalate tactics. If the authorities brutalize demonstrators and the public expresses outrage, or sections of the establishment criticize the repression, the movement can get some sense of who it might appeal to for support and who might restrain the authorities from further repression.

The final commentary in this series will discuss “Endgames for Social Strikes.”


Gene Sharp, The Politics of Nonviolent Action (Boston: Porter Sargent, 1973). Sharp recognizes both well-organized, intentional campaigns with well-defined leaderships like those of Gandhi, Martin Luther King, Jr., and most trade union-led strikes, and also more “spontaneous” nonviolent “people power” popular uprisings. He is a strong advocate for the former. While movements with such defined and empowered leadership indeed have advantages, the reality of social strikes is often more like an eruption from below. Nonetheless a great deal can be learned about strategy and tactics even for such “whirlwinds” from Sharp’s work. For understanding the dynamics of uprisings that emerge outside any kind of centralized control there is no substitute for studying the actual history of a variety of such movements.

====

This is part three of the series on Strikes. For the first two parts go to Labor Network for Sustainability: https://www.labor4sustainability.org/


Thunderbird adds native support for Microsoft Exchange accounts

Bleeping Computer
www.bleepingcomputer.com
2025-11-18 22:09:32
Thunderbird 145 has been released with full native support for Microsoft Exchange email via the Exchange Web Services (EWS) protocol. [...]...
Original Article

Thunderbird adds native support for Microsoft Exchange accounts

Thunderbird 145 has been released with full native support for Microsoft Exchange email via the Exchange Web Services (EWS) protocol.

This means that Thunderbird users in Microsoft Exchange environments (e.g., Microsoft 365, Office 365) no longer need third-party add-ons and benefit from seamless message synchronization and folder management locally and on the server.

Migrating from Outlook to Thunderbird is also easier, as Mozilla's email client automatically detects the settings and uses Microsoft’s OAuth2 authorization protocol.


“Until now, Thunderbird users in Exchange-hosted environments often relied on IMAP/POP protocols or third-party extensions,” reads Thunderbird’s announcement .

“With full native Exchange support for email, Thunderbird now works more seamlessly in Exchange environments, including full folder listings, message synchronization, folder management both locally and on the server, attachment handling, and more.”

Thunderbird is a free, open-source email application developed by Mozilla, and Microsoft Exchange is an email and collaboration server broadly used by businesses to handle communications, calendars, and contacts.

Typically, organizations either use Outlook as the email client app to connect to an Exchange server or the Microsoft 365 cloud service. However, any app with EWS support, now including Thunderbird, can connect to it.

To move from Outlook to Thunderbird, users have to add the account in version 145 or later of the email client, head to Account Hub > Exchange/Exchange Web Services, and let the application guide them through the rest of the process.

Setting up an Exchange account on Thunderbird
Source: Mozilla

Currently, the following features are supported:

  • Exchange email setup and folder access
  • Viewing, sending, replying/forwarding, moving/copying/deleting messages
  • Attachments (save, display, detach/delete)
  • Search (subject, body) and quick filtering
  • Microsoft 365 domains using standard OAuth2
  • On-premise Exchange with basic password authentication

The Thunderbird development team also plans to add support for calendar syncing, address book/contacts, filter actions requiring full-message content, Microsoft 365 domains requiring custom OAuth2 app/tenant IDs, on-premise NTLM and OAuth2 authentication, and Microsoft Graph integration.

No specific timelines were provided for when these will land on Thunderbird, and implementation times may vary per item.

Mozilla notes that EWS is still widely used, and Microsoft has promised to continue supporting it “for the foreseeable future,” even as the tech giant is transitioning to the Microsoft Graph interface as the main method to connect to Microsoft 365 services.

While support for the Microsoft Graph API is in the plans, the Thunderbird team considers EWS a priority right now to meet the needs of a larger user base.

More information about Thunderbird’s Exchange support is available on this wiki page , while the support portal provides instructions for account porting.


Blender 5.0

Hacker News
www.blender.org
2025-11-18 21:39:18
Comments...

Bild AI (YC W25) is hiring – Make housing affordable

Hacker News
www.ycombinator.com
2025-11-18 21:29:37
Comments...
Original Article

AI that understands construction blueprints

Founding Engineer (Applied AI)

$100K - $180K 0.20% - 1.20% San Francisco, CA, US

Role

Engineering, Machine learning

Experience

Any (new grads ok)

Skills

Python, Machine Learning, Deep Learning, Computer Vision

Connect directly with founders of the best YC-funded startups.

Apply to role ›

About the role

Puneet and I (Roop) founded Bild AI to tackle the mess that is blueprint reading, cost estimation, and permit applications in construction. It's a tough technical problem that requires the newest CV and AI approaches, and we’re impact-driven to make it more efficient to build more houses, hospitals, and schools. Featured on Business Insider .

You’ll

  • Focus on the intelligence-layer instead of the application-layer
  • Apply the newest computer vision, LLM models, and AI systems
  • Ship initial prototypes quickly, and improve them based on user feedback

We want

  • Applied CV/ML experience
  • 0-to-1 builder, with your AI in production
  • Growth mindset, slope > y-intercept
  • Communicate with empathy and honesty. Tell us what makes you happy/unhappy
  • Don’t mind schlep . We are regularly knee-deep in schlep

Bonus points (if any)

  • Have built something people used/paid for (founder/startup)
  • Have a construction background
  • Are impact-driven

Message with

  • Why we’re a fit
  • Your favorite fruit (mine is Sitaphal )
  • (Please keep it short)

Logistics

  • SF-based or willing to relocate, fulltime in-office

About Bild AI

Bild AI is an early-stage startup with a ton of really difficult technical challenges to solve. We're building blueprint understanding with a model-garden approach, so there is a lot of ground to break. We raised from the top VCs in the world before demo day and have a customer-obsessed approach to product development.

Bild AI

Founded: 2024

Batch: W25

Team Size: 5

Status: Active

Founders

Environmental Groups Sue to Stop Hochul's Revival of Thrice-Rejected Gas Pipeline

hellgate
hellgatenyc.com
2025-11-18 21:20:52
"The things that matter to this analysis and the reason we have water quality standards to protect the public's health—none of that has changed."...
Original Article
Environmental Groups Sue to Stop Hochul's Revival of Thrice-Rejected Gas Pipeline
(Scott Heins / Hell Gate)

Climate

"The things that matter to this analysis and the reason we have water quality standards to protect the public's health—none of that has changed."

Scott's Picks:

Environmental activists have filed a federal lawsuit that aims to stop the fossil fuel pipeline that was recently approved by Governor Kathy Hochul's administration and would run from New Jersey through New York Harbor.

The permit to build the 37-mile pipeline that would supply National Grid with fracked natural gas was denied three times by state regulators between 2018 and 2020, in large part because they concluded that the pipeline's construction would dredge up centuries of consolidated toxins resting on the seabed and leach the deadly chemicals into the harbor, impeding the decades-long effort to clean up the harbor.

But two weeks ago, Governor Hochul's Department of Environmental Conservation approved the permit for the pipeline's builders, the Williams Company, to proceed with the project—reportedly after a deal was cut by Governor Hochul and President Donald Trump to get Trump to restore approval to a wind farm off the coast of Long Island .


Better DOM Morphing with Morphlex

Lobsters
joel.drapper.me
2025-11-18 21:14:41
Comments...
Original Article

When you render HTML on the backend — e.g. with Phlex — how do you get it to the browser? One way is to have the user click a link, submit a form or reload the page. Alternatively, in response to some user interaction or push event, you could have JavaScript fetch new HTML and replace the DOM locally.

myElement.outerHTML = newHTML

The problem with these approaches is they lose state. Form inputs lose focus, text selection and scroll positions reset, event listeners and observers are detached, popovers and dialogs disappear, iframes reload and CSS animations are interrupted.

DOM Morphing attempts to solve this by carefully comparing the original DOM tree to the new DOM tree and “morphing” the original tree to match the new tree.

There are a number of established morphing libraries to choose from; Morphdom and Idiomorph are two of them.

Today, I’m introducing my own take on DOM morphing, Morphlex 1.0.

The node identity problem

The biggest problem in DOM morphing is node identity. How do you determine that one node in the original DOM is the same as another node in the new DOM? It could have been moved, removed or updated, or a new node could have been inserted in its place.

One solution is to depend on id attributes. If a node in the new DOM has the same id as a node in the original DOM, you can assume that this is the same node, even if its content has changed.

The unfortunate reality is most DOM is lacking in id attributes. Idiomorph pioneered an interesting technique to improve on this.

Since id attributes are unique on a page and since elements can have only one parent node, the ids of child nodes can be used to uniquely identify their parent nodes at any given depth level.

This works because the set of ids of any node’s children is unique at any level.

Essentially what Idiomorph does is scan the DOM for elements with ids and then walk up each element’s ancestors, adding that element’s id to the ancestor’s id set.

Say you have some DOM that looks like this.

<section>
  <article id="article-1">
    <div>
      <h1 id="heading-1">Hello!</h1>
    </div>
  </article>
</section>

If we picture the computed id sets as pseudo HTML attributes, they would look like this.

<section id-set="article-1 heading-1">
  <article id="article-1" id-set="article-1 heading-1">
    <div id-set="heading-1">
      <h1 id="heading-1" id-set="heading-1">Hello!</h1>
    </div>
  </article>
</section>

You can see how even though the <section> element never had an id, it can still be uniquely identified amongst its peers at its level by the set of ids of its children.
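In rough TypeScript, the id-set idea looks something like this. It is an illustrative sketch of the technique described above, not Idiomorph’s or Morphlex’s actual code:

function buildIdSets(root: Element): Map<Element, Set<string>> {
  const idSets = new Map<Element, Set<string>>();
  for (const el of root.querySelectorAll("[id]")) {
    // Walk up from each id-bearing element, adding its id to the id set of
    // the element itself and of every ancestor inside the root.
    let node: Element | null = el;
    while (node && root.contains(node)) {
      let set = idSets.get(node);
      if (!set) idSets.set(node, (set = new Set<string>()));
      set.add(el.id);
      node = node.parentElement;
    }
  }
  return idSets;
}

Two id-less elements can then be treated as candidates for the same node when their id sets overlap, which is how the <section> above gets identified.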

While this definitely improves things, it’s still not perfect. When there are no ids around, most DOM morphing libraries fall back to selecting the next element with the same tag name.

Unfortunately, this often leads to unnecessary cascading morphs in many realistic scenarios.

Inserting items

One common scenario is morphing to a new tree that’s exactly the same as the original tree except there’s one new node in a list.

In this example, when there are no ids, the algorithm selects the next element with the same tag name and converts that to match the corresponding element in the new DOM.

Item 1 is the same on both sides but since Item 2 was converted to New Item, Item 3 now has to be converted to Item 4, and Item 4 needs to become Item 5, and so on…

Ideally, the algorithm would be able to look ahead and see that there is a corresponding element in the original tree for all but one of the elements in the new tree. It could then just insert the new element into the correct position.

Removing items

It’s essentially the same when removing nodes. Item 2 is morphed into Item 3 which means Item 3 needs to be morphed into Item 4 and Item 4 into Item 5. After we morph Item 5 into Item 6, we’re left with a final node which needs to be removed.

Ideally, the algorithm would instead look ahead and see that the original DOM already has a copy of all the items in the new DOM and so the only thing left to do is remove Item 2.

Sorting items

Sorting items is just as bad. In this example, we’re only moving Item 3 up two places, but we have to do four morphs to get there.

Ideally, we would find the longest increasing subsequence (in this case, 1, 2, 4, 5, 6 and 7) and ignore these nodes since they’re in the right position already. Then we could just move the remaining item into place in one step.

How Morphlex solves these problems

At each level of the tree, Morphlex has a matching algorithm that tries to find the best match between elements in the original tree and the new tree.

First, it creates a set of elements on the left and another set of elements on the right. It also creates an array of matches.

It iterates over each element on the left, comparing them to each element on the right. The initial comparison uses isEqualNode to see if the nodes are exactly the same already. If it finds a perfect match, it removes the elements from their corresponding sets and writes them into the correct index on the matches array.

Then it iterates again, matching by exact id, and again by id set, then by tag name and the attributes name, href or src, and finally by tag name alone — or if it’s an input, by tag name and input type.

It removes the matched elements from their corresponding sets as it goes, so the iterations become cheaper and cheaper on each pass.
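As a heavily simplified TypeScript sketch of that tiered matching (illustrative only; the id-set pass, input-type handling and the index bookkeeping are left out), the structure is roughly:

type Match = { from: Element; to: Element };

function matchChildren(oldEls: Element[], newEls: Element[]): Match[] {
  const left = new Set(oldEls);
  const right = new Set(newEls);
  const matches: Match[] = [];

  // Cheapest-to-loosest passes; every match shrinks both sets,
  // so each later pass has less to scan.
  const passes: Array<(a: Element, b: Element) => boolean> = [
    (a, b) => a.isEqualNode(b),              // already identical
    (a, b) => a.id !== "" && a.id === b.id,  // exact id
    (a, b) => a.tagName === b.tagName &&     // tag plus a stable attribute
      ["name", "href", "src"].some((k) => a.getAttribute(k) !== null && a.getAttribute(k) === b.getAttribute(k)),
    (a, b) => a.tagName === b.tagName,       // last resort: same tag
  ];

  for (const pass of passes) {
    for (const a of left) {
      for (const b of right) {
        if (pass(a, b)) {
          matches.push({ from: a, to: b });
          left.delete(a);
          right.delete(b);
          break;
        }
      }
    }
  }

  // Whatever remains in `left` will be removed; whatever remains in `right` will be inserted.
  return matches;
}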

After matching, we’re left with a set of elements from the old tree to remove, a set of elements from the new tree to insert, and an array that maps old elements to matching new elements.

This array of matches is then used to calculate the longest increasing subsequence for optimal sorting.
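A minimal sketch of that step, again illustrative rather than Morphlex’s exact code: given, for each old child in document order, the index its match occupies in the new parent, it returns the indices that already form an increasing run and can be left alone.

function indicesToLeaveInPlace(newPositions: number[]): Set<number> {
  const n = newPositions.length;
  const keep = new Set<number>();
  if (n === 0) return keep;

  const lisLength = new Array<number>(n).fill(1); // LIS length ending at i
  const prev = new Array<number>(n).fill(-1);     // back-pointer for reconstruction
  let best = 0;

  for (let i = 0; i < n; i++) {
    for (let j = 0; j < i; j++) {
      if (newPositions[j] < newPositions[i] && lisLength[j] + 1 > lisLength[i]) {
        lisLength[i] = lisLength[j] + 1;
        prev[i] = j;
      }
    }
    if (lisLength[i] > lisLength[best]) best = i;
  }

  for (let i = best; i !== -1; i = prev[i]) keep.add(i);
  return keep;
}

For example, indicesToLeaveInPlace([1, 2, 0, 3, 4, 5, 6]) keeps every index except 2, which lines up with the sorting example above: only Item 3 has to move.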

Finally, everything is moved into place using moveBefore which maintains all the original state. If moveBefore isn’t available, insertBefore is used instead.
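The move itself can be sketched in a few lines. Element.moveBefore() is still new enough that the TypeScript DOM typings may not know about it, hence the widened type; this is an illustration of the fallback, not Morphlex’s source:

// Move `node` so it sits before `reference` inside `parent`, preserving state
// (focus, selection, iframe documents, animations) where the browser supports it.
function moveIntoPlace(parent: Element, node: Node, reference: Node | null): void {
  const p = parent as Element & { moveBefore?: (node: Node, child: Node | null) => void };
  if (typeof p.moveBefore === "function") {
    p.moveBefore(node, reference); // state-preserving atomic move
  } else {
    parent.insertBefore(node, reference); // older browsers: reorders, but may reset some state
  }
}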

You might be wondering how we can get away with using isEqualNode , since isEqualNode can’t tell, for example, if you have changed the value of an input element.

To get around this, we first iterate over all form inputs in the original tree and add a special attribute if the current value is different to the default value (if the value has been changed by a user). This special attribute allows isEqualNode to detect the change, then we can quietly remove the special attribute later on in the process.
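A sketch of that trick might look like the following, with a made-up attribute name standing in for whatever Morphlex actually uses:

const DIRTY_MARKER = "data-morph-value-changed"; // hypothetical attribute name

// Tag inputs whose live value differs from their default value so that
// isEqualNode can see the difference, and return a cleanup function that
// quietly removes the marker again afterwards.
function markChangedInputs(root: Element): () => void {
  const marked: HTMLInputElement[] = [];
  for (const input of root.querySelectorAll("input")) {
    if (input.value !== input.defaultValue) {
      input.setAttribute(DIRTY_MARKER, "");
      marked.push(input);
    }
  }
  return () => marked.forEach((input) => input.removeAttribute(DIRTY_MARKER));
}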

Having said that, my preference is to leave modified input values in place, since they were likely intended by the user. Morphlex provides the configuration option preserveChanges for this.

Performance

It’s difficult to say how Morphlex compares to Morphdom / Idiomorph because they effectively do different things. For example, there are scenarios where Morphdom will just replace what Morphlex will spend time matching and moving.

However, Morphlex typically ends up doing significantly less work than other DOM morphing libraries in real-world scenarios due to its design, so it’s very fast. The performance and accuracy are good enough to do whole-document morphs, in my limited experience.

Morphlex is released under an MIT license. Please let me know if you find any bugs.

Enjoy ❤️

War in Venezuela, Brought to You By the Same People Who Lied Us Into Iraq

Intercept
theintercept.com
2025-11-18 20:57:10
Washington is making big claims to make the case for U.S. intervention. We’ve heard all these arguments before. The post War in Venezuela, Brought to You By the Same People Who Lied Us Into Iraq appeared first on The Intercept....
Original Article
CARACAS, VENEZUELA - NOVEMBER 15: Supporters of President Maduro participate in a march to swear in the Bolivarian Grassroots Committees in Caracas, Venezuela, on November 15, 2025. (Photo by Pedro Mattey/Anadolu via Getty Images)
Supporters of President Nicolás Maduro participate in a march to swear in the Bolivarian Grassroots Committees in Caracas, Venezuela, on Nov. 15, 2025. Photo: Pedro Mattey/Anadolu via Getty Images

Alain Stephens is an investigative reporter covering gun violence, arms trafficking, and federal law enforcement.

The United States is amassing power off Venezuela’s coast. Warships, Marine detachments, and surveillance aircraft are flowing into the Caribbean under the banner of “counter-narcotics operations.” Military officials have presented Donald Trump with various game plans for potential operations. The U.S. president is openly tying Nicolás Maduro to narco-terror networks and cartel structures, while dangling both “talks” and threatening the use of military force in the same breath. It’s all pushing toward the culmination of crowning Maduro and his government America’s next top “terrorists” — the magic movie-script label that means the bombs can start heating up.

Then comes the media warm-up act: a New York Times op-ed by Bret Stephens, published on Monday, assuring readers in “The Case for Overthrowing Maduro” that this is all modest, calibrated, even reasonable .

“The serious question is whether American intervention would make things even worse,” Stephens writes. “Intervention means war, and war means death. … The law of unintended consequences is unrepealable.”

The column’s argument is simple: Relax. This isn’t Iraq, a conflict Stephens helped cheerlead our way into and proudly declared in 2023 that two decades later, he doesn’t regret supporting the war.

“There are also important differences between Venezuela and Iraq or Libya,” he continues. “They include Trump’s clear reluctance to put U.S. boots on the ground for any extended period. And they include the fact that we can learn from our past mistakes.”

Venezuela, Stephens argues, provides grounds for intervention against criminals in a failing state. Maduro is corrupt, the threat is real, and Trump’s moves are not the opening shots of a war but the necessary application of restrained power. It’s an argument Americans have heard before. And it’s as familiar as the hardware now cruising toward Caracas.

Everything Old Is New Again

The echoes of Iraq are everywhere: the moral certainty, the insistence on a narrow mission, laws stretched to accommodate force, the journalist class nudging readers toward the idea of escalation. The Times leans on that posture — the intellectual confidence that if a dictator is cruel enough, if his country is chaotic enough, then U.S. firepower is not only justified but prudent and even moral.

But step back. There’s nothing limited about an aircraft carrier strike group, including the world’s largest warship, moving into position near a country the United States has spent years sanctioning, isolating, and trying to politically dislodge. There’s nothing modest about weaving “narco-terrorism” into the policy narrative, a label that conveniently sidesteps congressional authorization. And there’s nothing reassuring about the president telling reporters he’s open to “talks,” while simultaneously telegraphing retaliatory force if Maduro doesn’t yield.

This is not law enforcement. It is coercive statecraft backed by military power. And when the press uncritically repeats the administration’s framing, the escalation becomes easier to swallow.

We’ve Seen This Movie Before

Iraq should have been the end of innocence in American foreign-policy thinking. We toppled Saddam Hussein; what followed was not liberation but vacuum . Power didn’t flow to democratic institutions — it scattered, producing insurgency, sectarian collapse, and a national debt Americans will never pay off.

We’ve watched this choreography before too. In 2002, the Washington Post assured readers that toppling Saddam and invading Iraq would be — I kid you not — a “cakewalk.” But the New York Times once again led the way: A 2001 piece titled “The U.S. Must Strike at Saddam Hussein” framed Saddam as driven by “hatred intensified by a tribal culture of the blood feud” and argued that preemptive war was America’s moral duty. By 2003, the Times was profiling “Liberals for War,” laundering the idea that even longtime doves were ready to get on board.

And then there was the big one: In September 2002, the front-page report insisting Iraq’s access to “aluminum tubes” was “intensifying its quest for bomb parts,” a claim that became one of the Bush administration’s most potent talking points despite falling apart under scrutiny. Less than two years later, the Times quietly admitted what the country already knew: Its coverage “wasn’t as rigorous as it should have been” — an apology that did nothing for the dead, the displaced, or the war that never ended.

The argument that a conflict with Venezuela is any different hinges on the fantasy that U.S. firepower can topple a foreign regime without creating irreversible instability. But Venezuela is already in economic freefall . Its state infrastructure is brittle. A miscalculation — a strike, a naval confrontation, a retaliatory move from Maduro — could fracture what remains of the country’s governance.

Even in articles and political rhetoric selling the safe insistence this isn’t anything like Iraq, it’s hitting the familiar beats: Redefine the battlefield as a courtroom, call the targets “terrorists,” and pretend the spectators won’t notice. It’s the old Washington parlor trick — war recast as paperwork, missiles disguised as “measured responses.” But beneath the soothing language is the real hazard: This posture locks the United States into a glide path toward escalation. It casts Maduro as a stationary object America can strike without consequence, right up until he isn’t. Because the moment a U.S. service member dies in some hillside village most Americans couldn’t find on a map last week, or a destroyer gets hit by something unseen in the dark, the mission will shed every polite euphemism. It won’t be “limited.” It won’t be “precision interdictions.” It will become the only war frame Washington and the political media never hesitates to embrace: American vengeance, expansive and unbounded.

The Myth of “Limited” War

The press should be asking harder questions, not just about the Pentagon’s talking points, but about what kind of wars we’re willing to inherit. What do we expect these campaigns to become once they outlast the news cycle and the political administration that started them? What do they cost us in dollars, in decades, in the quiet bleed of national attention? Americans are already living through a squeezed economy ; we can’t afford another open-ended conflict with the only measure of success being the upkeep of a strained momentum to throw bodies and dollars at finishing what we ultimately started.

But that’s easy to forget from a corner suite in Washington or a standing desk in Manhattan. From that distance, war looks like a policy instrument, a rhetorical jousting match, an intellectualized game played on someone else’s terrain. But the last two decades of living through America’s post-Iraq unraveling should have taught us otherwise. A sharper press, the right questions, and a robust, skeptical stance toward American intervention abroad could have spared lives: service members lost to missions with no endpoint , civilians flattened as “collateral damage,” entire regions left to absorb the shockwaves long after Washington moved on.

That’s the distance the press should be interrogating — between the people who greenlight these missions and the people who have to live inside them. Because if we don’t ask these questions now, we’ll end up asking them years later, after the bills come due and the country pretends it never saw this coming.

New ShadowRay attacks convert Ray clusters into crypto miners

Bleeping Computer
www.bleepingcomputer.com
2025-11-18 20:56:00
A global campaign dubbed ShadowRay 2.0 hijacks exposed Ray Clusters by exploiting an old code execution flaw to turn them into a self-propagating cryptomining botnet. [...]...
Original Article

New ShadowRay attacks convert Ray clusters into crypto miners

A global campaign dubbed ShadowRay 2.0 hijacks exposed Ray Clusters by exploiting an old code execution flaw to turn them into a self-propagating cryptomining botnet.

Developed by Anyscale, the Ray open-source framework allows building and scaling AI and Python applications in a distributed computing ecosystem organized into clusters, each coordinated by a head node.

According to researchers at runtime security company Oligo, a threat actor they track as IronErn440 is using AI-generated payloads to compromise vulnerable Ray infrastructure that is reachable over the public internet.

They say that the malicious activity goes beyond cryptocurrency mining, and in some cases, it includes data and credentials theft, as well as deploying distributed denial-of-service (DDoS) attacks.

New campaign, same (unfixed) flaw

ShadowRay 2.0 is the continuation of another ShadowRay campaign , also exposed by Oligo, which ran between September 2023 and March 2024.

Oligo researchers found that an old critical vulnerability tracked as CVE-2023-48022 was exploited in both campaigns. The security issue did not receive a fix as Ray was designed to run in a trusted environment described as a "strictly-controlled network environment."

However, the researchers say that there are more than 230,000 Ray servers available on the internet, a huge spike from "the few thousand we observed during our initial ShadowRay discovery."

In a report today, Oligo says that it observed two attack waves , one that abused GitLab for payload delivery and ended on November 5, and one that abuses GitHub and has been ongoing since November 17.

Malicious GitHub repository (Source: Oligo Security)

Payload capabilities

Oligo says that the payloads used in attacks were generated with the help of large language models. This conclusion was based on the analysis of code structure, the comments available, and the error handling patterns.

For instance, after deobfuscating one of the payloads, the researchers noticed that it contained "docstrings and useless echoes, which strongly implies the code is LLM-generated."

Part of the payload (Source: Oligo Security)

The attacks leverage CVE-2023-48022 to submit jobs to Ray’s unauthenticated Jobs API to run multi-stage Bash and Python payloads, and use the platform’s orchestration to deploy malware across all nodes, enabling autonomous cluster-to-cluster spreading.

The crypto-mining module also appears to be AI-generated and checks the available CPU and GPU resources as well as the type of access it has. Inside the payload code, the researchers found that the attacker appreciates a system with at least eight cores and root privileges, calling it "a very good boy."

It uses XMRig to mine for Monero and makes sure that it uses only 60% of the processing power to evade immediate detection.

Oligo found that the miners are dropped in deceptive file locations and use fake process names like ‘ dns-filter ’ to keep the activity under the radar. Persistence is achieved via cron jobs and systemd modifications.

Another interesting find is that the attacker ensures they are the only ones mining on the compromised Ray cluster, terminating any rival mining scripts. Additionally, they block other mining pools via /etc/hosts and iptables .

Miner configuration (Source: Oligo Security)

Apart from crypto-mining, the malware opens multiple Python reverse shells to the attacker infrastructure for interactive control, allowing access and potential exfiltration of workload environment data, MySQL credentials, proprietary AI models, and source code stored on the cluster.

It can also launch DDoS attacks using the Sockstress tool, which exploits asymmetric resource consumption by opening large numbers of TCP connections through raw sockets.

Looking at the attacker-created cron jobs, Oligo says that a script is executed every 15 minutes to check the GitHub repository for updated payloads.

Setting the persistence mechanism (Source: Oligo Security)

Defending against ShadowRay 2.0

Since there’s no available fix for CVE-2023-48022, Ray users are advised to follow the vendor’s recommended “best practices” when deploying their clusters.

Anyscale has also published an update on the topic after the first ShadowRay campaign was discovered, listing several recommendations, which include deploying Ray in a secure, trusted environment.

Clusters should be protected from unauthorized access using firewall rules and security group policies.

Oligo also suggests adding authorization on top of the Ray Dashboard port (8265 by default) and implementing continuous monitoring on AI clusters to identify anomalous activity.
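None of this replaces proper firewalling, but a quick external reachability check is a cheap first step. The tiny Rust sketch below is hypothetical and not part of Oligo's or Anyscale's tooling; it only tests whether the default dashboard port 8265 accepts TCP connections from the outside, which is the precondition for every attack described above. The address shown is a placeholder.

use std::net::{TcpStream, ToSocketAddrs};
use std::time::Duration;

// Returns true if any resolved address for host:port accepts a TCP connection.
// A reachable port 8265 from an untrusted network means the Ray dashboard /
// Jobs API is exposed and should be firewalled or put behind authorization.
fn dashboard_reachable(host: &str, port: u16) -> bool {
    match (host, port).to_socket_addrs() {
        Ok(addrs) => addrs.into_iter().any(|addr| {
            TcpStream::connect_timeout(&addr, Duration::from_secs(3)).is_ok()
        }),
        Err(_) => false,
    }
}

fn main() {
    // Placeholder address; point this at your own cluster's public IP.
    let host = "203.0.113.10";
    if dashboard_reachable(host, 8265) {
        println!("Ray dashboard port 8265 is reachable from here - lock it down.");
    } else {
        println!("Port 8265 is not reachable from this vantage point.");
    }
}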


Google Antigravity

Simon Willison
simonwillison.net
2025-11-18 20:52:35
Google Antigravity Google's other major release today to accompany Gemini 3 Pro. At first glance Antigravity is yet another VS Code fork Cursor clone - it's a desktop application you install that then signs in to your Google account and provides an IDE for agentic coding against their Gemini models....
Original Article

Google Antigravity . Google's other major release today to accompany Gemini 3 Pro . At first glance Antigravity is yet another VS Code fork Cursor clone - it's a desktop application you install that then signs in to your Google account and provides an IDE for agentic coding against their Gemini models.

When you look closer it's actually a fair bit more interesting than that.

The best introduction right now is the official 14 minute Learn the basics of Google Antigravity video on YouTube, where product engineer Kevin Hou (who previously worked at Windsurf) walks through the process of building an app.

There are some interesting new ideas in Antigravity. The application itself has three "surfaces" - an agent manager dashboard, a traditional VS Code style editor and deep integration with a browser via a new Chrome extension. This plays a similar role to Playwright MCP, allowing the agent to directly test the web applications it is building.

Antigravity also introduces the concept of "artifacts" (confusingly not at all similar to Claude Artifacts ). These are Markdown documents that are automatically created as the agent works, for things like task lists, implementation plans and a "walkthrough" report showing what the agent has done once it finishes.

I tried using Antigravity to help add support for Gemini 3 to my llm-gemini plugin.

Screenshot of the VS Code interface showing an implementation plan to update the llm-gemini library to support the thinking_level parameter for Gemini 3 Pro Preview, with the Open Agent Manager sidebar active on the right.

It worked OK at first then gave me an "Agent execution terminated due to model provider overload. Please try again later" error. I'm going to give it another go after they've had a chance to work through those initial launch jitters.

GitHub Down

Hacker News
news.ycombinator.com
2025-11-18 20:41:50
Comments...
Original Article

Seeing: """ fatal: Could not read from remote repository.

Please make sure you have the correct access rights and the repository exists. """

From multiple accounts across multiple orgs + repos.

Edit: there it is https://www.githubstatus.com

crux: Cross-platform app development in Rust

Lobsters
github.com
2025-11-18 20:40:59
Comments...
Original Article

Watch the introductory talk | Read the book | Read API docs | Join Zulip community

Cross-platform app development in Rust

  • Shared Core for Behavior - Crux helps you share your app's business logic and behavior across mobile (iOS/Android) and web — as a single reusable core built with Rust.
  • Thin Shell for UI - Crux recognizes that the best experiences are built with modern declarative frameworks such as SwiftUI , Jetpack Compose , React / Vue , or a WebAssembly based framework (like Leptos , or Yew ) — however, it aims to keep this UI layer as thin as it can be, with all other work done by the shared core.
  • Type Generation - the interface with the core has static type checking across languages — types and serialization code are generated for Swift, Kotlin and TypeScript. Rust shells can import the core directly.
  • Managed effects - Side effects such as calling an API are captured as values and executed by the Shell. The core stays side-effect free, making it portable across platforms and allowing high-level user journey tests to run in milliseconds (rather than minutes or even hours)

Getting Started

Learn how to use Crux in your project .

Read the API documentation

Watch the introductory talk at the recent Rust Nation 2023 conference in London.

You can also join the friendly conversation on our Zulip channel .

Note

Crux is pre 1.0 and under active development. It is production-ready, but occasional breaking changes to the API can be expected. We do our best to limit the extent of these and provide a smooth, gradual migration path

Architectural Overview

Logical architecture

Crux has managed side-effects: it strictly separates pure computational tasks from tasks that cause side effects. This is similar to the way Elm works.

Side-effect-free core

In the above diagram, the inner "Core" is compiled and linked to the outer "Shell" on each platform as a library:

  • On iOS as a native static library
  • On Android as a dynamic library using Java Native Access
  • In a browser as a WebAssembly module

In fact, because WebAssembly (Wasm) is one of the compilation targets, the core must remain side-effect free, due to the sandboxed nature of the Wasm runtime environment.

As such, the core is completely isolated and secure against software supply-chain attacks, as it has no access to any external APIs. All it can do is perform pure calculations and keep internal state.

Following the Elm architecture, the core defines the key component types within the application:

  • Event — an enum describing the events which the core can handle
  • Model — describes the internal state of the application
  • Effect – the kinds of side-effects the core will request
  • ViewModel — represents information that should be displayed to the user

The first three are tied together by the update function, familiar from Elm, Redux or other event sourcing architectures, which currently has this type signature:

fn update(
    &self,
    msg: Event,
    model: &mut Model,
    _caps: &Capabilities, // soon to be deprecated
) -> Command<Effect, Event> {
    // ...
}

The job of the update function is to process an Event , update the model accordingly, and potentially request some side-effects.
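To make that concrete, here is a small, self-contained counter written in the same Elm style. It uses stand-in Effect and ViewModel types rather than the real crux_core Command and capability APIs, so treat it as an illustration of the shape, not as working Crux code.

// Stand-alone illustration of the Event / Model / update / view shape.
// These are not crux_core types; a real app would return Command<Effect, Event>.

#[derive(Debug)]
enum Event {
    Increment,
    Decrement,
    Reset,
}

#[derive(Default)]
struct Model {
    count: i64,
}

// Stand-in for the effects a real core would request from the shell.
#[derive(Debug, PartialEq)]
enum Effect {
    Render,
}

struct ViewModel {
    text: String,
}

// Pure function: consume an event, mutate the model, request effects.
fn update(event: Event, model: &mut Model) -> Vec<Effect> {
    match event {
        Event::Increment => model.count += 1,
        Event::Decrement => model.count -= 1,
        Event::Reset => model.count = 0,
    }
    vec![Effect::Render]
}

fn view(model: &Model) -> ViewModel {
    ViewModel { text: format!("Count is: {}", model.count) }
}

fn main() {
    let mut model = Model::default();
    for event in [Event::Increment, Event::Increment, Event::Decrement] {
        assert_eq!(update(event, &mut model), vec![Effect::Render]);
    }
    println!("{}", view(&model).text); // Count is: 1
    update(Event::Reset, &mut model);
    println!("{}", view(&model).text); // Count is: 0
}

Because update is a pure function over plain values, a test (or any shell) can drive it directly, which is what the Testing section below relies on.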

Note

The Capability API is being deprecated in favour of a more flexible Command API.

To learn more about the new Command API , see the page in the book that describes Managed Effects , or look at examples/counter .

Application Shell

The enclosing platform native "Shell" is written using the language appropriate for the platform, and acts as the runtime environment within which all the non-pure tasks are performed. From the perspective of the core, the shell is the platform on which the core runs.

Testing

Tests can act as another Shell, exercising the Core in the same way a real app would, observing and resolving the requested effects and checking the model and view model are correct. No need for fakes, mocks or stubs.

Communication Between the Application Shell and the Core

Following the Elm architecture, the interface with the core is message based. To perform any task that creates a side-effect (such as an HTTP call or random number generation), the core must request it from the shell as an Effect . Effects support fire-and-forget, request/response, and streaming semantics.

Crux has a concept of Capabilities — reusable interfaces for common side-effects which can be used in the Core as a more ergonomic API.

The only built-in capability is Render . But this repository contains a few capabilities at various stages of maturity, and you can easily write your own if you want to:

crux

Published capabilities

  1. Render (ask UI to render the ViewModel) — source , built-in to crux_core , request only
  2. Http (full HTTP implementation based on the Surf API) — source , crate , request/response
  3. KeyValue (basic key-value store API) — source , crate , request/response
  4. Time (get current time, notify after duration, notify at instant) — source , crate , request/response
  5. Platform (get the current platform) — source , crate , request/response

Example custom capabilities

  1. SSE (basic Server-Sent Events) — source , request/streaming
  2. PubSub (pub sub with streaming) — source , request/response/streaming
  3. Delay — part of tutorial in the book

Foreign Function Interface with type generation

The core API interface is very minimal:

  • process_event: Event -> Vec<Request> - processes a user interaction event and potentially responds with capability requests. This is the API for the driving side in the above diagram.
  • handle_response: (uuid, SomeResponse) -> Vec<Request> - handles the response from the capability and potentially follows up with further requests. This is the API for the driven side in the above diagram.
  • view: () -> ViewModel - provides the shell with the current data for displaying the user interface

The Foreign Function Interface allowing the shell to call the above functions is provided by Mozilla's UniFFI on a mobile device, or in the browser, by wasm-pack .

In order to both send more complex data than UniFFI currently supports, and enforce the message passing semantics, all messages are serialized, sent across the boundary, then deserialized using serde-generate which also provides type generation for the foreign (non-Rust) languages.

This means that changes to types in the core, especially the Event and Request types, propagate out into the shell implementations and cause type errors where appropriate (such as an exhaustive match on an enum check).

Message Types

Three types of message are exchanged between the application and the core.

  • Messages of type Event are sent from the Shell to the Core in response to an event happening in the user interface (the driving side). They start a potential sequence of further message exchanges between the shell and the core. Messages are passed on unchanged.
  • Messages of type Request are sent from the Core to the Shell to request the execution of some side-effect-inducing task. The Core responds with zero or more Request messages after receiving an Event message (the driven side).
  • Response messages are sent from the Shell to the Core carrying the result of an earlier request.

Request messages contain the inputs for the requested side-effect, along with an id used by the core to pair requests and their responses together. The exact mechanics are not important, but it is important for the request's id to be passed on to the corresponding response.

Example Message Exchange Cycle

A typical message exchange cycle may look like this:

  1. User interaction occurs in the Shell, which results in an event
  2. The Shell handles this event by constructing an Event
  3. The Shell calls the Core's process_event function passing the Event as an argument
  4. The Core performs the required processing, updating both its inner state and the view model
  5. The Core returns one or more Request messages to the Shell (inside an enum tagging the type of request)

In the simplest case, the Core will respond to an Event by returning the single Request - render.

This requests that the Shell re-renders the user interface. When Render is the only response from the Core, the message cycle has completed and the Core has now "settled".

In more complex cases however, the Core may well return multiple Request s; each of which instructs the Shell to perform a side-effect-inducing task such as:

  • Make a network call, or
  • Fetch the current date/time stamp, or
  • Perform biometric authentication, or
  • Obtain an image from the camera, or
  • Whatever else you can think of...

Many of these side-effect-inducing tasks are asynchronous. The Shell is responsible for passing responses back to the core (to the handle_response function), which may respond with further requests.

This exchange continues until the core stops requesting further side-effects (typically the last side-effect requested would again be Render ).
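As a rough sketch of that driving loop, here is a self-contained Rust example with stand-in Request, Response, and core types rather than the real generated bindings; the shell keeps performing requested effects and feeding responses back until the queue of requests drains.

// Hypothetical shell-side loop; Request, Response and FakeCore are stand-ins,
// not the generated bindings a real Crux shell would use.

#[derive(Debug)]
enum Request {
    Render,
    Http { id: u32, url: String },
}

enum Response {
    Http { id: u32, body: String },
}

struct FakeCore;

impl FakeCore {
    // Stand-in for process_event: ask for one HTTP call plus a render.
    fn process_event(&mut self) -> Vec<Request> {
        vec![
            Request::Http { id: 1, url: "https://example.com".into() },
            Request::Render,
        ]
    }

    // Stand-in for handle_response: once the result arrives, just re-render.
    fn handle_response(&mut self, response: Response) -> Vec<Request> {
        let Response::Http { id, body } = response;
        println!("core: response for request {id} ({} bytes)", body.len());
        vec![Request::Render]
    }
}

fn main() {
    let mut core = FakeCore;
    let mut queue = core.process_event();

    // The shell performs each requested side effect and feeds the result back,
    // pairing it with the request id, until the core stops asking ("settled").
    while let Some(request) = queue.pop() {
        match request {
            Request::Render => println!("shell: re-render the UI"),
            Request::Http { id, url } => {
                println!("shell: fetching {url}");
                let response = Response::Http { id, body: "...".into() };
                queue.extend(core.handle_response(response));
            }
        }
    }
}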


Sponsors

Crux is kindly sponsored by the following organizations. Your help is very much appreciated.


Red Badger Consulting Limited

Red Badger logo

Red Badger is the digital product consultancy trusted by blue chips and global brands. Our product design and technical pedigree allow us to craft high-impact digital products customers want. We use modern engineering approaches to deliver sustainable change. And embed digital capabilities to power continuous innovation.


Zulip

Zulip round icon

Zulip is an open-source modern team chat app designed to keep both live and asynchronous conversations organized.

Zulip sponsors Crux by providing our Zulip server — thank you Zulip!


GitHub: Git Operation Failures

Hacker News
www.githubstatus.com
2025-11-18 20:40:46
Comments...
Original Article

Update

Codespaces is experiencing degraded availability. We are continuing to investigate.

Posted Nov 18 , 2025 - 21:25 UTC

Update

We are currently investigating failures on all Git operations, including both SSH and HTTP.

Posted Nov 18 , 2025 - 21:11 UTC

Update

We are seeing failures for some git http operations and are investigating

Posted Nov 18 , 2025 - 20:52 UTC

Update

Git Operations is experiencing degraded availability. We are continuing to investigate.

Posted Nov 18 , 2025 - 20:39 UTC

Investigating

We are currently investigating this issue.

Posted Nov 18 , 2025 - 20:39 UTC

This incident affects: Git Operations and Codespaces.

Oracle is underwater on its 'astonishing' $300B OpenAI deal

Hacker News
www.ft.com
2025-11-18 20:29:35
Comments...
Original Article


It’s too soon to be talking about the Curse of OpenAI, but we’re going to anyway.

Since September 10, when Oracle announced a $300bn deal with the chatbot maker, its stock has shed $315bn* in market value:

OK, yes, it’s a gross simplification to just look at market cap. But equivalents to Oracle shares are little changed over the same period (Nasdaq Composite, Microsoft, Dow Jones US Software Index), so the $60bn loss figure is not entirely wrong. Oracle’s “ astonishing quarter ” really has cost it nearly as much as one General Motors , or two Kraft Heinz .

Investor unease stems from Big Red betting a debt-financed data farm on OpenAI, as MainFT reported last week. We’ve nothing much to add to that report other than the below charts showing how much Oracle has, in effect, become OpenAI’s US public market proxy:

The theory goes that OpenAI is in a rush to discover AGI, and Oracle is uniquely able to scale the compute capacity it needs. Oracle promises the lowest upfront costs and fastest path to income generation among the hyperscalers because it’s a data centre tenant rather than the landlord.

Alternatively, Oracle doesn’t have as much operating profit to burn as its competitors, so is throwing everything it can at supporting its one big customer in exchange for an IOU:

At an analyst day last month in Las Vegas, Oracle said it was aiming for cloud computing revenue of $166bn by 2030:

© Oracle company presentation

To get there, Oracle’s capex budget for the current financial year ending May is $35bn. The consensus has annual capex levelling out at around $80bn a year in 2029, after which revenues continue to ramp:

And from 2027, the majority of revenue would be coming from OpenAI:

But Oracle’s net debt is already at 2.5 times ebitda, having more than doubled since 2021, and it’s expected to nearly double again by 2030. Cash flow is forecast to remain negative for five straight years:

So while the OpenAI agreement has been more than written off the equity, the risk of unfunded expansion remains and the cost of hedging Oracle debt is at a three-year high.

We need to add the usual warnings: Credit-default-swap liquidity isn’t great ; the increased demand for Oracle CDS comes after $18bn of bond sales in September ; a CDS premium in the low 100 basis points isn’t that exciting; and some firms taking the other side of the trade are no mugs. Still, pointy.

Beyond the charts, a broader question relates to whether an OpenAI deal is still worth announcing.

A few months ago, any kind of agreement with OpenAI could make a share price go up. OpenAI did very nicely out of its power to reflect glory, most notably in October when it took AMD warrants as part of a chip deal that bumped share price by 24 per cent.

But Oracle is not the only laggard. Broadcom and Amazon are both down following OpenAI deal news, while Nvidia’s barely changed since its investment agreement in September. Without a share price lift, what’s the point? A combined trillion dollars of AI capex might look like commitment, but investment fashions are fickle.

* Calculation and graph updated at 11am GMT for shares outstanding, and text tweaked at 2pm GMT to reflect a less clickbait headline

Further reading:
Oracle’s astonishing jam-tomorrow OpenAI trade (FTAV)

Blender 5.0 released

Linux Weekly News
lwn.net
2025-11-18 20:22:45
Version 5.0 of the Blender animation system has been released. Notable improvements include improved color management, HDR capabilities, and a new storyboarding template. See the release notes for a lengthy list of new features and changes, and the bugfixes page for the 588 commits that fixed bugs i...
Original Article

[Posted November 18, 2025 by jzb]

Version 5.0 of the Blender animation system has been released. Notable improvements include improved color management, HDR capabilities, and a new storyboarding template. See the release notes for a lengthy list of new features and changes, and the bugfixes page for the 588 commits that fixed bugs in Blender 4.5 or older.



The Final Straw: Why Companies Replace Once-Beloved Technology Brands

Hacker News
www.functionize.com
2025-11-18 20:20:58
Comments...
Original Article
What causes a business to abandon hardware, software, or tools it once relied on? Enumerating the common reasons helps you recognize when it’s time to move on.

Once upon a time, your company ran Lotus 1-2-3 and WordPerfect on Novell servers, wrote its custom applications in Turbo Pascal and Visual Basic, and used Nortel or Cascade Communications to connect to the nascent Internet, which you still unapologetically described as the Information Superhighway.

Don’t snort in derision. Your business relied on those tools and technologies because they worked . Users knew the product’s features. IT support understood its configuration foibles. The budget was predictable, and those vendors were trustworthy, safe choices.

But life moves on. Despite company and user loyalty, at some point, someone chose to replace that software, hardware, or infrastructure with something else. What made the business stop using them?

This isn’t an idle #GetOffMyLawn contemplation. Change management is a regular concern for CIOs, IT managers, and the CFOs who glower at them. The best way to determine, “Is it time to abandon this known supplier?” is to contemplate why enterprise organizations left behind the established brands of their past.

Companies usually hold onto existing hardware, software, computing environments, programming languages, databases, or whatever (let’s call it the Turbo Encabulator) for as long as possible. This is supported by file formats, established workflows, supplier contracts, and other elements that contribute to corporate inertia and technical debt.

Eventually, a “final straw” tips the balance, and the organization commits to a transition. “That does it!” they say. “It’s time to switch.”

Here’s how I categorize those final straws, which may operate singly or as part of a hay bale.

There are two kinds of fools. One says, “This is old, and therefore good.” And one says, “This is new, and therefore better.”
Dean Inge

Functionality . It no longer does what you need. Or it doesn’t do it the right way.

The ideal scenario is the path of innovation: Better products replace obsolete ones.

In that situation, the Turbo Encabulator’s once-unique capability becomes commonplace. Now, another supplier has useful, new, and attractive features. Those capabilities may be related to product operation rather than product function, such as vastly improved performance, trustworthy security, better automation, or other “how it works” stuff.

Organizations have to be careful when they seek new functionality, warding against buzzword snipe hunts (surely you don’t need examples of AI-infused anything?). Just because the functionality is new doesn’t mean it’s relevant and valuable.

Quality . “Good enough” is no longer good enough.

When the user experience sucks, it’s time to look for alternatives.

The predictable examples: New versions are buggy. Connectivity fails. Reliability falters. Tech support is slow, incompetent, or difficult to access. You can’t find a knowledgeable human to answer questions.

Less obvious quality failures: A new version is too different. New features make the user experience inefficient, complicated, or disorganized. For instance, the new Turbo Encabulator Pro changes its dashboards or APIs, so experienced users must relearn how to interact with them. The vendor repeats this error with the next version, too. Hosting companies turn “Where to modify DNS records” into a scavenger hunt. Or the vendor dumbs down the Turbo Encabulator features to cater to non-tech-savvy users, obfuscating the system for experts.

Integration issues . The leg bone no longer connects to the thigh bone.

Plenty of businesses can and do hold onto vintage computing technologies because those internal systems don’t need to talk to anything outside the company. For instance, the San Francisco Muni Metro light railway, launched in 1980, boots its Automatic Train Control System from a floppy disk. And companies stay with older versions of operating systems as long as they can find printer drivers. That’s fine - until it isn’t.

Eventually, business and technical standards move on. For instance, a business may be compelled to migrate to a new system when the only way to connect to a data provider is with an API that the Turbo Encabulator does not support. Application changeovers got underway when clients demanded Word files even though the company was internally happy with WordPerfect.

Environmental shifts encourage other migrations. The decision to change one element causes people to reconsider the overall architecture. When companies adopted Windows, they often changed application vendors (from Lotus 1-2-3 to Microsoft Excel). When enterprise organizations moved to cloud environments, they considered alternative suppliers that had grown up with the Internet rather than wait for an established provider to scramble to keep up.

Loss of confidence in the provider . When trust disappears, it’s over.

It’s common for customers to jump ship when a provider is in financial or other trouble. Management shakeups, tech leadership changes, and product cancellations are tip-offs. So are missed ship dates.

Mergers and acquisitions make businesses nervous. Look at the uncertainty when Broadcom acquired VMware , accompanied by layoffs and fears about dismantling product lines.

The lack of confidence may not directly relate to a supplier’s financial future. The company could change AI data collection policies or data sharing practices (such as Sonos ), drop its support for open source (as Redis did ), change the structure of enterprise licensing, or publicly engage in corporate drama . It could be personal, too, such as when a reliable technical contact jumps ship to a competitor.

Price . The cost goes up significantly without adding value.

Money is the obvious reason. It’s easy to connect cost issues to the organization’s bottom line. If a product isn’t worth the investment, a replacement is justified. You might put up with other problems, but if it costs too much? Game over.

Cost concerns drove enterprise acceptance of open source. Techies extolled the open-source advantages in features and support for years. Yet, the corporate transition to Linux occurred only after financial decision-makers realized the significant savings they could achieve.

Need examples? I know one team that switched project management systems when the entrenched application – for which staff had created dozens of integrations and other customizations – raised its licensing fee to double that of a perfectly adequate competitor. Nobody needed any of the “new and improved” features. Variations: Unity’s price increase was so egregious that it had to roll it back. The photo-sharing site Flickr’s subscription model changed so much that, for a while, it couldn’t even attract free subscribers .

Peopleware. Decisionmakers make political or emotional choices.

Sometimes, the final straw has nothing to do with the product, which still works great. Personal politics and corporate favoritism often override other factors.

These examples may sound familiar to experienced IT workers:

  • A new CEO prefers a different product and insists the company switch.
  • A new IT manager sees no value in the Turbo Encabulator, emphasizes different areas, or is sure it could be built better and cheaper internally.
  • An executive reads an article about “Why everyone should…” and – on what others might charitably call a whim – insists the company make a significant change.
  • New leadership wants to prove its worth by eliminating what the last team acquired or built.

Those peopleware issues do not emanate only from the C-suite. For instance, migrations happen after the people who initially adopted the Turbo Encabulator leave the company. With the team membership changing, different preferences may prevail, justly or otherwise. Given an opportunity, some teams choose to switch away from established but hated tools out of spite.

It’s the final straw. Not the only one.

Sometimes, it’s hard to discern which element is the final straw because everything is interconnected. A security breach causes your company management to lose trust in the vendor. An established provider increases its subscription pricing at the same point where users complain about the new version’s bugs. The organization likes everything about a computing environment, but the ecosystem doesn’t support a critical need, such as an application vendor dropping support for a favored operating system. But I think they all fall in one or another of these categories.

In most cases, the Turbo Encabulator providers – the vendors, open-source developers, and product designers – earnestly want to keep customers. Sometimes, they stumble: A competitor makes a genuinely innovative advancement or is faster to comply with an attractive buzzword. In other circumstances, they make avoidable mistakes, such as deploying confusing user interfaces or choosing greed over customer delight.

Esther Schindler began writing about the intersection of technology and business before your company ran Lotus 1-2-3 and WordPerfect on Novell servers .

Jerry Seinfeld and the "too sensitive" left

Hacker News
reedybear.bearblog.dev
2025-11-18 20:16:20
Comments...
Original Article

ReedyBear's Blog

"Jerry Seinfeld can't do comedy any more because people are too sensitive," someone said to me.

And I'm immediately defending those who would cancel him, justifying it.

I haven't listened to his comedy in years. I don't follow him. I don't know if he's racist or sexist, or if his comedy is even politically incorrect. And if it isn't PC, I don't know if it's in good faith or not.

I've been so bothered by right wingers complaining that they can't even say anything anymore that I immediately took a position on a topic I know almost nothing about.

Oh, Jerry Seinfeld does still do comedy though. He has like 40-50 gigs from now through May.

But yeah, I guess he says he won't play college campuses any more because of students who call him racist or sexist.

His complaints might be justified and his comedy might be good. I honestly don't know, and I don't want to jump to conclusions like I did. I also don't want to accept his complaints at face value either. I just want to ... not know, and be okay with that.

#blog #politics

The Only Olivia Nuzzi-RFK Jr. Take You Need to Read

hellgate
hellgatenyc.com
2025-11-18 20:11:28
Your group chat isn’t talking about this part of it—but they should be....
Original Article
The Only Olivia Nuzzi-RFK Jr. Take You Need to Read
(AP Photo / Jose Luis Magana; White House / Abe McNatt; Pinterest of Darkblade 666; Rachel Cook / Unsplash)


A surprise with how '#!' handles its program argument in practice

Lobsters
utcc.utoronto.ca
2025-11-18 19:50:34
Comments...
Original Article

You're probably reading this page because you've attempted to access some part of my blog (Wandering Thoughts) or CSpace , the wiki thing it's part of. Unfortunately whatever you're using to do so has a HTTP User-Agent header value that is too generic or otherwise excessively suspicious. Unfortunately, as of early 2025 there's a plague of high volume crawlers (apparently in part to gather data for LLM training) that behave like this. To reduce the load on Wandering Thoughts I'm experimenting with (attempting to) block all of them, and you've run into this.

All HTTP User-Agent headers should clearly identify what they are, and for non-browser user agents, they should identify not just the software involved but also who specifically is using that software. An extremely generic value such as " Go-http-client/1.1 " is not something that I consider acceptable any more.

Chris Siebenmann, 2025-02-17

When You Give a Bully Your Lunch Money

Daring Fireball
bsky.app
2025-11-18 19:37:56
President Donald Trump, today in the Oval Office, after being asked a question by an ABC News reporter regarding the Epstein files: People are wise to your hoax. ABC, your company, your crappy company, is one of the perpetrators. And I’ll tell you something, I think the license should be taken a...

HOPE Hacking Conference Banned From University Venue Over Apparent ‘Anti-Police Agenda’

403 Media
www.404media.co
2025-11-18 19:32:14
"The lack of due process on its own is extremely disturbing," the conference said....
Original Article

The legendary hacker conference Hackers on Planet Earth (HOPE) says that it has been “banned” from St. John’s University, the venue where it has held the last several HOPE conferences, because someone told the university the conference had an “anti-police agenda.”

HOPE was held at St. John’s University in 2022, 2024, and 2025, and was going to be held there in 2026, as well. The conference has been running at various venues over the last 31 years, and has become well-known as one of the better hacking and security research conferences in the world. Tuesday, the conference told members of its mailing list that it had “received some disturbing news,” and that “we have been told that ‘materials and messaging’ at our most recent conference ‘were not in alignment with the mission, values, and reputation of St. John’s University’ and that we would no longer be able to host our events there.”

The conference said that after this year’s conference, they had received “universal praise” from St. John’s staff, and said they were “caught by surprise” by the announcement.

“What we're told - and what we find rather hard to believe - is that all of this came about because a single person thought we were promoting an anti-police agenda,” the email said. “They had spotted pamphlets on a table which an attendee had apparently brought to HOPE that espoused that view. Instead of bringing this to our attention, they went to the president's office at St. John's after the conference had ended. That office held an investigation which we had no knowledge of and reached its decision earlier this month. The lack of due process on its own is extremely disturbing.”

“The intent of the person behind this appears clear: shut down events like ours and make no attempt to actually communicate or resolve the issue,” the email continued. “If it wasn't this pamphlet, it would have been something else. In this day and age where academic institutions live in fear of offending the same authorities we've been challenging for decades, this isn't entirely surprising. It is, however, greatly disappointing.”

St. John’s University did not immediately respond to a request for comment. Hacking and security conferences in general have a long history of being surveilled by or losing their venues. For example, attendees of the DEF CON hacking conference have reported being surveilled and having their rooms searched; last year, some casinos in Las Vegas made it clear that DEF CON attendees were not welcome. And academic institutions have been vigorously attacked by the Trump administration over the last few months over the courses they teach, the research they fund, and the events they hold, though we currently do not know the specifics of why St. John’s made this decision.

It is not clear what pamphlets HOPE is referencing, and the conference did not immediately respond to a request for comment, but the conference noted that St. John’s could have made up any pretext for banning them. It is worth mentioning that Joshua Aaron, the creator of the ICEBlock ICE tracking app, presented at HOPE this year. ICEBlock has since been deleted by the Apple App Store and the Google Play store after being pressured by the Trump administration.

“Our content has always been somewhat edgy and we take pride in challenging policies we see as unfair, exposing security weaknesses, standing up for individual privacy rights, and defending freedom of speech,” HOPE wrote in the email. The conference said that it has not yet decided what it will do next year, but that it may look for another venue, or that it might “take a year off and try to build something bigger.”

“There will be many people who will say this is what we get for being too outspoken and for giving a platform to controversial people and ideas. But it's this spirit that defines who we are; it's driven all 16 of our past conferences. There are also those who thought it was foolish to ever expect a religious institution to understand and work with us,” the conference added. “We are not changing who we are and what we stand for any more than we'd expect others to. We have high standards for our speakers, presenters, and staff. We value inclusivity and we have never tolerated hate, abuse, or harassment towards anyone. This should not be news, as HOPE has been around for a while and is well known for its uniqueness, spirit, and positivity.”


ACLU and EFF Sue a City Blanketed With Flock Surveillance Cameras

403 Media
www.404media.co
2025-11-18 19:31:36
“Most drivers are unaware that San Jose’s Police Department is tracking their locations and do not know all that their saved location data can reveal about their private lives and activities."...
Original Article

Lawyers from the American Civil Liberties Union (ACLU) and Electronic Frontier Foundation (EFF) sued the city of San Jose , California over its deployment of Flock’s license plate-reading surveillance cameras, claiming that the city’s nearly 500 cameras create a pervasive database of residents’ movements in a surveillance network that is essentially impossible to avoid.

The lawsuit was filed on behalf of the Services, Immigrant Rights & Education Network and Council on American-Islamic Relations, California, and claims that the surveillance is a violation of California’s constitution and its privacy laws. The lawsuit seeks to require police to get a warrant in order to search Flock’s license plate system. The lawsuit is one of the highest profile cases challenging Flock; a similar lawsuit in Norfolk, Virginia seeks to get Flock’s network shut down in that city altogether.

“San Jose’s ALPR [automatic license plate reader] program stands apart in its invasiveness,” ACLU of Northern California and EFF lawyers wrote in the lawsuit. “While many California agencies run ALPR systems, few retain the locations of drivers for an entire year like San Jose. Further, it is difficult for most residents of San Jose to get to work, pick up their kids, or obtain medical care without driving, and the City has blanketed its roads with nearly 500 ALPRs.”

The lawsuit argues that San Jose’s Flock cameras “are an invasive mass surveillance technology” that “collect[s] driver locations en masse.”

“Most drivers are unaware that San Jose’s Police Department is tracking their locations and do not know all that their saved location data can reveal about their private lives and activities,” it adds. The city of San Jose currently has at least 474 ALPR cameras, up from 149 at the end of 2023; according to data from the city, more than 2.6 million vehicles were tracked using Flock in the month of October alone. The lawsuit states that Flock ALPRs are stationed all over the city, including “around highly sensitive locations including clinics, immigration centers, and places of worship. For example, three ALPR cameras are positioned on the roads directly outside an immigration law firm.”

Andrew Crocker, surveillance litigation director for the EFF, told 404 Media in a phone call that “it’s fair to say that anyone driving in San Jose is likely to have their license plates captured many times a day. That pervasiveness is important.”

DeFlock's map of San Jose's ALPRs
A zoomed in look at San Jose

A search of DeFlock, a crowdsourced map of ALPR deployments around the country, shows hundreds of cameras in San Jose spaced essentially every few blocks around the city. The map is not exhaustive.

The lawsuit argues that warrantless searches of these cameras are illegal under the California constitution’s search and seizure clause, which Crocker said “has been interpreted to be even stronger than the Fourth Amendment,” as well as other California privacy laws. The case is part of a broader backlash against Flock as it expands around the United States. 404 Media’s reporting has shown that the company collects millions of records from around the country, and that it has made its national database of car locations available to local cops who have in turn worked with ICE. Some of those searches have violated California and Illinois law, and have led to reforms from the company . Crocker said that many of these problems will be solved if police simply need to get a warrant to search the system.

“Our legal theory and the remedy we’re seeking is quite simple. We think they need a warrant to search these databases,” he said. “The warrant requirement is massive and should help in terms of preventing these searches because they will have to be approved by a judge.” The case in Norfolk is ongoing. San Jose Police Department and Flock did not immediately respond to a request for comment.


Windows 11 gets new Cloud Rebuild, Point-in-Time Restore tools

Bleeping Computer
www.bleepingcomputer.com
2025-11-18 19:29:52
Microsoft announced two new Windows 11 recovery features today at the Ignite developer conference, called Cloud Rebuild and Point-in-Time Restore (PITR), that aim to reduce downtime and make it easier to recover from system failures or faulty updates. [...]...
Original Article

Windows 11

Microsoft announced two new Windows 11 recovery features today at the Ignite developer conference, called Cloud Rebuild and Point-in-Time Restore (PITR), that aim to reduce downtime and make it easier to recover from system failures or faulty updates.

The new recovery features are part of Microsoft's Windows Resiliency Initiative and are designed to help organizations quickly restore devices that are no longer able to start or function properly.

The first feature, Point-in-Time Restore (PITR), allows users and IT administrators to roll back a Windows 11 system to an earlier, healthy snapshot within minutes.

Like System Restore, PITR restores an operating system, its settings, and system files to a previously stored state. However, Point-in-Time Restore builds on the System Restore feature by taking a complete snapshot of the system at different points in time, so it can also restore local files and applications.

Microsoft says this feature will enter preview this week in an upcoming Windows 11 Insider preview build.

Microsoft is also introducing Cloud Rebuild, a tool that can remotely trigger a complete reinstall of Windows 11 from the cloud for devices experiencing persistent problems or becoming inoperable.

"Through the Intune portal, admins will be able to select the desired Windows release and language, triggering the PC to download installation media and rebuild itself," explains Microsoft .

"The process leverages Autopilot for zero-touch provisioning, ensuring MDM enrollment and policy compliance post-rebuild. User data and settings restoration is streamlined via OneDrive and Windows Backup for Organizations. This approach will reduce downtime from hours—or days—to a fraction of that time."

Microsoft says that both of these features will be integrated directly within Microsoft Intune in the first half of 2026, allowing Windows admins to trigger recovery actions remotely, coordinate enterprise-wide remediation, and control Windows Recovery Environment (WinRE) functionality directly from Intune.

Earlier this month, Microsoft began testing an updated version of Quick Machine Recovery (QMR) , a tool designed to help administrators resolve Windows boot failures without requiring physical access to a device.

Quick Machine Recovery in the Advanced Startup menu
Windows 11 Quick Machine Recovery settings
Source: Microsoft

When Windows 11 encounters a boot failure caused by a configuration change, problematic driver, or update, it will automatically launch the Windows Recovery Environment, load QMR, and send crash information to Microsoft.

Based on the analysis of this data, Microsoft can remotely apply fixes such as removing problematic drivers or updates and changing configuration settings.

Microsoft says the latest release improves on the QMR boot-repair process by performing a single scan to detect and resolve issues, rather than repeatedly searching for a solution in a loop.


Quoting Ethan Mollick

Simon Willison
simonwillison.net
2025-11-18 19:24:28
Three years ago, we were impressed that a machine could write a poem about otters. Less than 1,000 days later, I am debating statistical methodology with an agent that built its own research environment. The era of the chatbot is turning into the era of the digital coworker. To be very clear, Gemini...
Original Article

Three years ago, we were impressed that a machine could write a poem about otters. Less than 1,000 days later, I am debating statistical methodology with an agent that built its own research environment. The era of the chatbot is turning into the era of the digital coworker. To be very clear, Gemini 3 isn’t perfect, and it still needs a manager who can guide and check it. But it suggests that “human in the loop” is evolving from “human who fixes AI mistakes” to “human who directs AI work.” And that may be the biggest change since the release of ChatGPT.

Ethan Mollick , Three Years from GPT-3 to Gemini 3

New EU Chat Control proposal moves forward

Hacker News
techreport.com
2025-11-18 19:16:09
Comments...
Original Article

Key Takeaways:

  • The EU has introduced a new Chat Control proposal: Mandatory scanning is gone, but Article 4’s ‘risk mitigation’ clause could still push services toward scanning private and encrypted messages.
  • Anonymity could be severely limited: Age-verification rules would make anonymous accounts difficult, affecting journalists, whistleblowers, and users who rely on privacy for safety.
  • The scope of scanning is expanding: The proposal allows detection of chat text and metadata, raising concerns about large-scale monitoring across the EU’s 450M citizens.
  • The technology behind it still isn’t viable: Experts say safe CSAM detection in encrypted apps doesn’t exist yet, even Apple abandoned its own client-side scanning system after backlash.
New EU Chat Control Proposal Moves Forward – Privacy Experts See a Dangerous Backdoor.

The Chat Control proposal is back in Brussels. Again.

Lawmakers are treating it like a familiar guest who keeps showing up at the door wearing a slightly different jacket. Privacy experts say the jacket is hiding something sharp.

A revised version of the EU’s Child Sexual Abuse Regulation (CSAR) has now moved from the Law Enforcement Working Party to the Coreper (Committee of Permanent Representatives).

Coreper is the group of permanent representatives from all EU member states; if Coreper likes the text, the Council will adopt its position. After that, the proposal jumps straight into a fast trilogue.

On paper, the new version looks softer. Mandatory scanning of private chats, photos, and URLs was removed. Scanning is now voluntary. Lawmakers seem happy. They might even feel relieved.

Privacy experts, however, are staring at one line in Article 4 like it’s a hidden knife taped under the table.

Let’s break down what actually changed, what didn’t, and why critics say this version may be even worse than the old one.

The ‘Voluntary’ Scanning That Doesn’t Feel Very Voluntary

The Danish Presidency produced the new compromise after negotiations stalled for over three years.

When the Law Enforcement Working Party met on November 12 , the group accepted it with broad support.

No dissenting votes. No further changes needed. A rare moment of harmony inside the EU Council meeting room.

The key change is the removal of mandatory scanning. Messaging apps will not be forced to scan shared pictures, videos, or URLs. Providers like WhatsApp, Signal, Telegram, and email services can choose whether to scan for CSAM.

It sounds like the pressure is gone.

But then Article 4 happens. It includes something vague, flexible, and extremely powerful. It’s called a ‘risk mitigation measure.’ High-risk services may need to apply ‘all appropriate risk mitigation measures.’ The phrase feels harmless until you imagine how governments could interpret it.

article 4 extract from the new chat control proposal.
Source: Proposal for a Regulation of the European Parliament and of the Council

Germany has publicly reaffirmed opposition to a version of the proposal that mandates scanning of encrypted chats. Whether it will maintain that firm stance throughout the negotiations remains uncertain.

Patrick Breyer, digital rights jurist and longtime critic of Chat Control, says this line reintroduces mandatory scanning through the back door. His argument is simple.

If a service is labeled ‘high-risk,’ it might be obliged to scan everything anyway. Even private, end-to-end encrypted content.

Patrick Breyer post on X.

Breyer says this could make client-side scanning mandatory. That is when your phone or laptop scans your messages before encryption kicks in. It essentially turns your device into a small police assistant.

You never asked for that. It’s like buying headphones and discovering they also whisper everything you say back to a security office.

Encryption Isn’t Just a Tech Feature – It’s How Modern Life Works

The biggest concern is the effect on end-to-end encryption. This is the shield that protects private communication on WhatsApp, Signal, and other messengers.

It’s the same shield used by journalists, doctors, activists, lawyers, and everyone who occasionally sends a photo of their passport to a friend for a hotel booking.

How does end-to-end encryption work

Breaking encryption has always been the red line. No government has found a safe way to weaken encryption for criminals without also weakening it for everyone else.

It’s like removing the doors from all apartments in a building because one person is suspected of wrongdoing.

Everyone becomes vulnerable, and burglars get a Black Friday sale they didn’t expect.

The new compromise avoids saying ‘break encryption.’ It uses vague language. However, privacy specialists argue that the outcome remains the same.

If scanning becomes a mandatory risk mitigation measure, encrypted platforms will need to scan content before encryption is applied. That collapses the entire security model.

Anonymous Communication May Also Be on the Line

The Fight Chat Control group published a summary of the new text. They highlight another major change. Anonymous communication becomes nearly impossible.

fight chat control initiative extract.
Source: fightchatcontrol.eu

The proposal requires every user to verify their age before accessing communication services. This eliminates the option to create anonymous accounts.

That affects whistleblowers and journalists. It affects people escaping abusive households. It affects people living under repressive governments who rely on anonymity for safety.

Requiring age verification for every single user is like asking everyone to show their passport before entering a grocery store. It may solve one problem. It creates many more.

Article 6 also includes restrictions that critics call a ‘digital house arrest’ for minors. It bans children from installing many apps associated with grooming risk. The list includes WhatsApp, Instagram, and even online games like Roblox.

Imagine a 15-year-old today without messaging apps or online games. They would end up communicating solely through school assignments and fridge magnets.

Why This Version Worries Experts Even More

The original proposal already concerned privacy advocates. It focused on scanning photos, videos, and URLs for CSAM content.

The new version goes further. Breyer notes it includes scanning of private chat text and metadata. Metadata can reveal who you talk to, how often, and from where.

It turns the communication graph of the entire EU population into a map available for inspection.

This shift from media scanning to text scanning is a significant development. It expands what authorities can request. It expands the scope of what companies must monitor to avoid being labeled ‘high-risk.’ And it expands the potential for abuse.

Critics also point out that voluntary scanning does not guarantee privacy.

If one major app decides to comply, others may feel pressure to follow. Competition might turn into a race where the winner is the one who scans the most.

A Political Win, A Technical Minefield

Politically, lawmakers are celebrating. After years of deadlock, they finally have a text that appears less aggressive. Removing mandatory scanning looks like a concession.

It’s easy to present this as a victory for privacy.

Technically, the situation is far from reassuring. The proposal now relies heavily on interpretation. The phrase ‘all appropriate risk mitigation measures’ could mean anything.

It gives enormous discretion to authorities. It lets governments later argue that scanning is essential for safety.

That is why privacy groups call this version a political trick. It removes the scary parts from the front of the bill. Then it grows them back under a different name.

The EU Parliament Will Have Its Say – But History Is Complicated

The next step is Coreper. If they approve the text on November 19 or soon after, the Council will adopt its official position.

Then a trilogue begins between the Council, the Commission, and the European Parliament.

Fight Chat Control extract: what's next.
Source: fightchatcontrol.eu

In theory, Parliament could oppose it.

In practice, Parliament has often compromised on surveillance laws after political pressure. Privacy groups fear a rushed trilogue where Parliament gives in to urgency.

The Council and the Commission are now aligned. Both want stronger online monitoring. This alignment alone makes many observers nervous.

The Bigger Story: Europe Keeps Trying to Build Scanning Systems That Don’t Exist

There is a broader theme here. The EU continues to propose scanning systems that experts say cannot operate safely.

The automatic detection of CSAM in encrypted environments remains technically unsolved. Client-side scanning has accuracy issues, privacy concerns, and a potential for misuse.

Even Apple backed away from its own client-side scanning feature after heavy criticism from researchers.

The EU is once again attempting to regulate technology that does not yet exist in a safe form. It is similar to writing a law that requires cars to fly by next summer.

The idea might be noble. The engineering reality is not ready.

Governments want a system that detects serious crimes. Researchers seek a system that malicious actors cannot exploit. Companies want a system that doesn’t destroy trust.

So far, no system satisfies all three.

The Real Test Is About to Begin

The next few days will decide how far the EU is willing to push this plan.

Coreper will review the text, and if nobody objects, the Council will lock in its position fast. Privacy groups and security experts are raising alarms again because the new compromise still creates a path to mass scanning, even if the language looks softer.

The proposal also threatens anonymity and introduces new monitoring routes that could reshape private communication for 450M people in the EU.

Lawmakers call it progress. Experts call it a warning sign.

Everything now depends on how Article 4 is interpreted and how much power it quietly hands over. The final battle will happen in trilogue, and the tech community is already bracing for impact.

Anya Zhukova


Crypto tracking platform DappRadar shuts down, citing financial woes

Web3 Is Going Great
web3isgoinggreat.com
2025-11-18 19:10:11
Amid a month of falling crypto prices, the crypto tracking platform DappRadar has announced it will be shutting down after seven years of operation. "Running a platform of this scale became financially unsustainable in the current environment," the company announced on Twitter.The company h...
Original Article

Amid a month of falling crypto prices, the crypto tracking platform DappRadar has announced it will be shutting down after seven years of operation. "Running a platform of this scale became financially unsustainable in the current environment," the company announced on Twitter.

The company had previously raised several rounds of financing, with a $2.3 million seed round in 2019 and a $5 million Series A in 2021.

Trying out Gemini 3 Pro with audio transcription and a new pelican benchmark

Hacker News
simonwillison.net
2025-11-18 19:05:44
Comments...
Original Article

18th November 2025

Google released Gemini 3 Pro today. Here’s the announcement from Sundar Pichai, Demis Hassabis, and Koray Kavukcuoglu , their developer blog announcement from Logan Kilpatrick , the Gemini 3 Pro Model Card , and their collection of 11 more articles . It’s a big release!

I had a few days of preview access to this model via AI Studio . The best way to describe it is that it’s Gemini 2.5 upgraded to match the leading rival models .

Gemini 3 has the same underlying characteristics as Gemini 2.5. The knowledge cutoff is the same (January 2025). It accepts 1 million input tokens, can output up to 64,000 tokens, and has multimodal inputs across text, images, audio, and video.

Benchmarks

Google’s own reported numbers (in the model card) show it scoring slightly higher than Claude Sonnet 4.5 and GPT-5.1 on most of the standard benchmarks. As always I’m waiting for independent confirmation, but I have no reason to believe those numbers are inaccurate.

Table of benchmark numbers, described in full below

Pricing

In terms of pricing, it’s a little more expensive than Gemini 2.5 but still cheaper than Claude Sonnet 4.5. Here’s how it fits in with those other leading models:

Model pricing, per 1M tokens (input / output):

GPT-5.1: $1.25 / $10.00
Gemini 2.5 Pro: ≤ 200k tokens $1.25 / $10.00; > 200k tokens $2.50 / $15.00
Gemini 3 Pro: ≤ 200k tokens $2.00 / $12.00; > 200k tokens $4.00 / $18.00
Claude Sonnet 4.5: ≤ 200k tokens $3.00 / $15.00; > 200k tokens $6.00 / $22.50
Claude Opus 4.1: $15.00 / $75.00

Trying it out against a complex image

That screenshot of the benchmarks from above looked like a good test for Gemini 3’s multimodal support. I fed it that image URL and asked it to generate alt text for the image:

llm -m gemini-3-pro-preview -a https://static.simonwillison.net/static/2025/gemini-3-benchmarks.jpg 'Alt text for this image, include all figures and make them comprehensible to a screen reader user'

Here’s what I got back:

A comprehensive table comparing four AI models across various performance benchmarks. The columns listed are: Benchmark Name, Description, Gemini 3 Pro (highlighted in blue), Gemini 2.5 Pro, Claude Sonnet 4.5, and GPT-5.1.

Humanity’s Last Exam (Academic reasoning)

  • No tools: Gemini 3 Pro 37.5%, Gemini 2.5 Pro 21.6%, Claude Sonnet 4.5 13.7%, GPT-5.1 26.5%.
  • With search and code execution: Gemini 3 Pro 45.8% (others have no data).

ARC-AGI-2 (Visual reasoning puzzles; ARC Prize Verified)

  • Gemini 3 Pro 31.1%, Gemini 2.5 Pro 4.9%, Claude Sonnet 4.5 13.6%, GPT-5.1 17.6%.

GPQA Diamond (Scientific knowledge; No tools)

  • Gemini 3 Pro 91.9%, Gemini 2.5 Pro 86.4%, Claude Sonnet 4.5 83.4%, GPT-5.1 88.1%.

AIME 2025 (Mathematics)

  • No tools: Gemini 3 Pro 95.0%, Gemini 2.5 Pro 88.0%, Claude Sonnet 4.5 87.0%, GPT-5.1 94.0%.
  • With code execution: Gemini 3 Pro 100%, Claude Sonnet 4.5 100%.

MathArena Apex (Challenging Math Contest problems)

  • Gemini 3 Pro 23.4%, Gemini 2.5 Pro 0.5%, Claude Sonnet 4.5 1.6%, GPT-5.1 1.0%.

MMMU-Pro (Multimodal understanding and reasoning)

  • Gemini 3 Pro 81.0%, Gemini 2.5 Pro 68.0%, Claude Sonnet 4.5 68.0%, GPT-5.1 76.0%.

ScreenSpot-Pro (Screen understanding)

  • Gemini 3 Pro 72.7%, Gemini 2.5 Pro 11.4%, Claude Sonnet 4.5 36.2%, GPT-5.1 3.5%.

CharXiv Reasoning (Information synthesis from complex charts)

  • Gemini 3 Pro 81.4%, Gemini 2.5 Pro 69.6%, Claude Sonnet 4.5 68.5%, GPT-5.1 69.5%.

OmniDocBench 1.5 (OCR; Overall Edit Distance, lower is better)

  • Gemini 3 Pro 0.115, Gemini 2.5 Pro 0.145, Claude Sonnet 4.5 0.145, GPT-5.1 0.147.

Video-MMMU (Knowledge acquisition from videos)

  • Gemini 3 Pro 87.6%, Gemini 2.5 Pro 83.6%, Claude Sonnet 4.5 77.8%, GPT-5.1 80.4%.

LiveCodeBench Pro (Competitive coding problems; Elo Rating, higher is better)

  • Gemini 3 Pro 2,439; Gemini 2.5 Pro 1,775; Claude Sonnet 4.5 1,418; GPT-5.1 2,243.

Terminal-Bench 2.0 (Agentic terminal coding; Terminus-2 agent)

  • Gemini 3 Pro 54.2%, Gemini 2.5 Pro 32.6%, Claude Sonnet 4.5 42.8%, GPT-5.1 47.6%.

SWE-Bench Verified (Agentic coding; Single attempt)

  • Gemini 3 Pro 76.2%, Gemini 2.5 Pro 59.6%, Claude Sonnet 4.5 77.2%, GPT-5.1 76.3%.

t2-bench (Agentic tool use)

  • Gemini 3 Pro 85.4%, Gemini 2.5 Pro 54.9%, Claude Sonnet 4.5 84.7%, GPT-5.1 80.2%.

Vending-Bench 2 (Long-horizon agentic tasks; Net worth (mean), higher is better)

  • Gemini 3 Pro $5,478.16; Gemini 2.5 Pro $573.64; Claude Sonnet 4.5 $3,838.74; GPT-5.1 $1,473.43.

FACTS Benchmark Suite (Held out internal grounding, parametric, MM, and search retrieval benchmarks)

  • Gemini 3 Pro 70.5%, Gemini 2.5 Pro 63.4%, Claude Sonnet 4.5 50.4%, GPT-5.1 50.8%.

SimpleQA Verified (Parametric knowledge)

  • Gemini 3 Pro 72.1%, Gemini 2.5 Pro 54.5%, Claude Sonnet 4.5 29.3%, GPT-5.1 34.9%.

MMMLU (Multilingual Q&A)

  • Gemini 3 Pro 91.8%, Gemini 2.5 Pro 89.5%, Claude Sonnet 4.5 89.1%, GPT-5.1 91.0%.

Global PIQA (Commonsense reasoning across 100 Languages and Cultures)

  • Gemini 3 Pro 93.4%, Gemini 2.5 Pro 91.5%, Claude Sonnet 4.5 90.1%, GPT-5.1 90.9%.

MRCR v2 (8-needle) (Long context performance)

  • 128k (average): Gemini 3 Pro 77.0%, Gemini 2.5 Pro 58.0%, Claude Sonnet 4.5 47.1%, GPT-5.1 61.6%.
  • 1M (pointwise): Gemini 3 Pro 26.3%, Gemini 2.5 Pro 16.4%, Claude Sonnet 4.5 (not supported), GPT-5.1 (not supported).

I have not checked every line of this but a loose spot-check looks accurate to me.

That prompt took 1,105 input and 3,901 output tokens, at a cost of 5.6824 cents .

I ran this follow-up prompt:

llm -c 'Convert to JSON'

You can see the full output here , which starts like this:

{
  "metadata": {
    "columns": [
      "Benchmark",
      "Description",
      "Gemini 3 Pro",
      "Gemini 2.5 Pro",
      "Claude Sonnet 4.5",
      "GPT-5.1"
    ]
  },
  "benchmarks": [
    {
      "name": "Humanity's Last Exam",
      "description": "Academic reasoning",
      "sub_results": [
        {
          "condition": "No tools",
          "gemini_3_pro": "37.5%",
          "gemini_2_5_pro": "21.6%",
          "claude_sonnet_4_5": "13.7%",
          "gpt_5_1": "26.5%"
        },
        {
          "condition": "With search and code execution",
          "gemini_3_pro": "45.8%",
          "gemini_2_5_pro": null,
          "claude_sonnet_4_5": null,
          "gpt_5_1": null
        }
      ]
    },

Analyzing a city council meeting

To try it out against an audio file I extracted the 3h33m of audio from the video Half Moon Bay City Council Meeting—November 4, 2025 . I used yt-dlp to get that audio:

yt-dlp -x --audio-format m4a 'https://www.youtube.com/watch?v=qgJ7x7R6gy0'

That gave me a 74M m4a file, which I ran through Gemini 3 Pro like this:

llm -m gemini-3-pro-preview -a /tmp/HMBCC\ 11⧸4⧸25\ -\ Half\ Moon\ Bay\ City\ Council\ Meeting\ -\ November\ 4,\ 2025\ \[qgJ7x7R6gy0\].m4a 'Output a Markdown transcript of this meeting. Include speaker names and timestamps. Start with an outline of the key meeting sections, each with a title and summary and timestamp and list of participating names. Note in bold if anyone raised their voices, interrupted each other or had disagreements. Then follow with the full transcript.'

That failed with an “Internal error encountered” message, so I shrunk the file down to a more manageable 38MB using ffmpeg :

ffmpeg -i "/private/tmp/HMB.m4a" -ac 1 -ar 22050 -c:a aac -b:a 24k "/private/tmp/HMB_compressed.m4a"

Then ran it again like this (for some reason I had to use --attachment-type this time):

llm -m gemini-3-pro-preview --attachment-type /tmp/HMB_compressed.m4a 'audio/aac' 'Output a Markdown transcript of this meeting. Include speaker names and timestamps. Start with an outline of the key meeting sections, each with a title and summary and timestamp and list of participating names. Note in bold if anyone raised their voices, interrupted each other or had disagreements. Then follow with the full transcript.'

This time it worked! The full output is here , but it starts like this:

Here is the transcript of the Half Moon Bay City Council meeting.

Meeting Outline

1. Call to Order, Updates, and Public Forum

  • Summary: Mayor Brownstone calls the meeting to order. City Manager Chidester reports no reportable actions from the closed session. Announcements are made regarding food insecurity volunteers and the Diwali celebration. During the public forum, Councilmember Penrose (speaking as a citizen) warns against autocracy. Citizens speak regarding lease agreements, downtown maintenance, local music events, and homelessness outreach statistics.
  • Timestamp: 00:00:00—00:13:25
  • Participants: Mayor Brownstone, Matthew Chidester, Irma Acosta, Deborah Penrose, Jennifer Moore, Sandy Vella, Joaquin Jimenez, Anita Rees.

2. Consent Calendar

  • Summary: The Council approves minutes from previous meetings and a resolution authorizing a licensing agreement for Seahorse Ranch. Councilmember Johnson corrects a pull request regarding abstentions on minutes.
  • Timestamp: 00:13:25—00:15:15
  • Participants: Mayor Brownstone, Councilmember Johnson, Councilmember Penrose, Vice Mayor Ruddick, Councilmember Nagengast.

3. Ordinance Introduction: Commercial Vitality (Item 9A)

  • Summary: Staff presents a new ordinance to address neglected and empty commercial storefronts, establishing maintenance and display standards. Councilmembers discuss enforcement mechanisms, window cleanliness standards, and the need for objective guidance documents to avoid subjective enforcement.
  • Timestamp: 00:15:15—00:30:45
  • Participants: Karen Decker, Councilmember Johnson, Councilmember Nagengast, Vice Mayor Ruddick, Councilmember Penrose.

4. Ordinance Introduction: Building Standards & Electrification (Item 9B)

  • Summary: Staff introduces updates to the 2025 Building Code. A major change involves repealing the city’s all-electric building requirement due to the 9th Circuit Court ruling ( California Restaurant Association v. City of Berkeley ). Public speaker Mike Ferreira expresses strong frustration and disagreement with “unelected state agencies” forcing the City to change its ordinances.
  • Timestamp: 00:30:45—00:45:00
  • Participants: Ben Corrales, Keith Weiner, Joaquin Jimenez, Jeremy Levine, Mike Ferreira, Councilmember Penrose, Vice Mayor Ruddick.

5. Housing Element Update & Adoption (Item 9C)

  • Summary: Staff presents the 5th draft of the Housing Element, noting State HCD requirements to modify ADU allocations and place a measure on the ballot regarding the “Measure D” growth cap. There is significant disagreement from Councilmembers Ruddick and Penrose regarding the State’s requirement to hold a ballot measure. Public speakers debate the enforceability of Measure D. Mike Ferreira interrupts the vibe to voice strong distaste for HCD’s interference in local law. The Council votes to adopt the element but strikes the language committing to a ballot measure.
  • Timestamp: 00:45:00—01:05:00
  • Participants: Leslie (Staff), Joaquin Jimenez, Jeremy Levine, Mike Ferreira, Councilmember Penrose, Vice Mayor Ruddick, Councilmember Johnson.

Transcript

Mayor Brownstone [00:00:00] Good evening everybody and welcome to the November 4th Half Moon Bay City Council meeting. As a reminder, we have Spanish interpretation services available in person and on Zoom.

Victor Hernandez (Interpreter) [00:00:35] Thank you, Mr. Mayor, City Council, all city staff, members of the public. [Spanish instructions provided regarding accessing the interpretation channel on Zoom and in the room.] Thank you very much.

Those first two lines of the transcript already illustrate something interesting here: Gemini 3 Pro chose NOT to include the exact text of the Spanish instructions, instead summarizing them as “[Spanish instructions provided regarding accessing the interpretation channel on Zoom and in the room.]”.

I haven’t spot-checked the entire 3hr33m meeting, but I’ve confirmed that the timestamps do not line up. The transcript closes like this:

Mayor Brownstone [01:04:00] Meeting adjourned. Have a good evening.

That actually happens at 3h31m5s and the mayor says:

Okay. Well, thanks everybody, members of the public for participating. Thank you for staff. Thank you to fellow council members. This meeting is now adjourned. Have a good evening.

I’m disappointed about the timestamps, since mismatches there make it much harder to jump to the right point and confirm that the summarized transcript is an accurate representation of what was said.

This took 320,087 input tokens and 7,870 output tokens, for a total cost of $1.42 .
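That figure is easy to sanity-check against the pricing table above. A quick back-of-the-envelope calculation (my own, assuming the > 200k-token rates apply to both input and output once the prompt exceeds 200k tokens) lands on the same number:

# Rough cost check for the audio transcription run, using the Gemini 3 Pro
# prices quoted earlier. Assumes the >200k-token tier applies to both input
# and output because the prompt itself exceeded 200k tokens.
input_tokens = 320_087
output_tokens = 7_870

input_cost = input_tokens / 1_000_000 * 4.00     # $4.00 per 1M input tokens
output_cost = output_tokens / 1_000_000 * 18.00  # $18.00 per 1M output tokens

print(f"${input_cost + output_cost:.2f}")        # -> $1.42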

And a new pelican benchmark

Gemini 3 Pro has a new concept of a “thinking level” which can be set to low or high (and defaults to high). I tried my classic Generate an SVG of a pelican riding a bicycle prompt at both levels.

Here’s low—Gemini decided to add a jaunty little hat (with a comment in the SVG that says <!-- Hat (Optional Fun Detail) --> ):

The pelican is wearing a blue hat. It has a good beak. The bicycle is a little bit incorrect but generally a good effort.

And here’s high. This is genuinely an excellent pelican, and the bicycle frame is at least the correct shape:

The pelican is not wearing a hat. It has a good beak. The bicycle is accurate and well-drawn.

Honestly though, my pelican benchmark is beginning to feel a little bit too basic. I decided to upgrade it. Here’s v2 of the benchmark, which I plan to use going forward:

Generate an SVG of a California brown pelican riding a bicycle. The bicycle must have spokes and a correctly shaped bicycle frame. The pelican must have its characteristic large pouch, and there should be a clear indication of feathers. The pelican must be clearly pedaling the bicycle. The image should show the full breeding plumage of the California brown pelican.

For reference, here’s a photo I took of a California brown pelican recently (sadly without a bicycle):

A glorious California brown pelican perched on a rock by the water. It has a yellow tint to its head and a red spot near its throat.

Here’s Gemini 3 Pro’s attempt at high thinking level for that new prompt:

It's clearly a pelican. It has all of the requested features. It looks a bit abstract though.

And for good measure, here’s that same prompt against GPT-5.1 —which produced this dumpy little fellow:

The pelican is very round. Its body overlaps much of the bicycle. It has a lot of dorky charisma.

And Claude Sonnet 4.5, which didn’t do quite as well :

Oh dear. It has all of the requested components, but the bicycle is a bit wrong and the pelican is arranged in a very awkward shape.

Fortinet warns of new FortiWeb zero-day exploited in attacks

Bleeping Computer
www.bleepingcomputer.com
2025-11-18 19:01:39
Today, Fortinet released security updates to patch a new FortiWeb zero-day vulnerability that threat actors are actively exploiting in attacks. [...]...
Original Article

Fortinet

Today, Fortinet released security updates to patch a new FortiWeb zero-day vulnerability that threat actors are actively exploiting in attacks.

Tracked as CVE-2025-58034 , this web application firewall security flaw was reported by Jason McFadyen of Trend Micro's Trend Research team.

Authenticated threat actors can gain code execution by successfully exploiting this OS command injection vulnerability in low-complexity attacks that don't require user interaction.


"An Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection') vulnerability [CWE-78] in FortiWeb may allow an authenticated attacker to execute unauthorized code on the underlying system via crafted HTTP requests or CLI commands," Fortinet said.

"Fortinet has observed this to be exploited in the wild," the American cybersecurity company noted in a Tuesday security advisory .

To block incoming attacks, admins are advised to upgrade their FortiWeb devices to the latest available software released today.

Affected versions and fixes:

FortiWeb 8.0: versions 8.0.0 through 8.0.1 affected; upgrade to 8.0.2 or above
FortiWeb 7.6: versions 7.6.0 through 7.6.5 affected; upgrade to 7.6.6 or above
FortiWeb 7.4: versions 7.4.0 through 7.4.10 affected; upgrade to 7.4.11 or above
FortiWeb 7.2: versions 7.2.0 through 7.2.11 affected; upgrade to 7.2.12 or above
FortiWeb 7.0: versions 7.0.0 through 7.0.11 affected; upgrade to 7.0.12 or above

Last week, Fortinet also confirmed that it silently patched another massively exploited FortiWeb zero-day (CVE-2025-64446) on October 28, three weeks after the threat intel firm Defused first reported active exploitation .

According to Defused, attackers are using HTTP POST requests to create new admin-level accounts on Internet-exposed devices.

On Friday, CISA also added CVE-2025-64446 to its catalog of actively exploited vulnerabilities and ordered U.S. federal agencies to secure their systems by November 21 .

BleepingComputer has reached out to Fortinet and Trend Micro with questions about these flaws, but we have yet to receive a response.

Earlier this year, in August, Fortinet patched another command injection vulnerability (CVE-2025-25256) with publicly available exploit code in its FortiSIEM security monitoring solution, one day after a report from cybersecurity company GreyNoise regarding a massive spike in brute-force attacks targeting Fortinet SSL VPNs.

Fortinet vulnerabilities are often exploited (often as zero days) in cyber espionage and ransomware attacks. For instance, Fortinet disclosed in February that the Chinese Volt Typhoon hacking group exploited two FortiOS SSL VPN flaws (CVE-2022-42475 and CVE-2023-27997) to backdoor a Dutch Ministry of Defence military network using custom Coathanger remote access trojan (RAT) malware.


Do you really need to buy a new TV? Seven simple ways to upgrade your setup (some are even free)

Guardian
www.theguardian.com
2025-11-18 19:00:54
Don’t splash out just yet! From a system update to better room lighting, a little fine-tuning could save you hundreds • Do you really need to buy a new laptop? Do you really need to buy a new TV? While the latest specs and outrageous screen sizes may well be a temptation, perhaps you can save money ...
Original Article

Do you really need to buy a new TV? While the latest specs and outrageous screen sizes may well be a temptation, perhaps you can save money (and the environment) by holding off a little longer. With some simple tips and tricks, you can level up your TV experience.

Of course, the Fomo is real. Back in the day, the only reason to buy a new TV was when the old one fizzled and died. One telly was much the same as another, and features rarely changed.

The market is very different today. Every year ushers in a new wave of high-tech flatscreens, sharper than the generation before. What was once a bland box in the corner is now a pinnacle of technology, wall-mounted like a work of art.

But I feel your pain. It doesn’t take long before that once-cutting-edge screen starts to feel a little dull around the edges. Your smart TV’s picture doesn’t seem quite as crisp as it did, and its operating system seems to be losing IQ points by the month. Apps take longer to load, menus feel clunky, and somewhere deep down, you know that a shiny new OLED would make everything better.

But would it? I’m here to tell you that it’s perfectly possible to postpone the inevitable and improve your TV experience with a little fine-tuning, and perhaps some new (cheaper) kit.


Upgrade your smart TV OS

One sure sign that your TV is running on borrowed time is its smart platform. Every telly worth its salt has a gateway to streaming services and smart home niceties, but as boffins fast-track ever smarter software, older platforms inevitably fall from favour. Streaming apps stop getting updates, and the slick interface that once impressed your mates now feels like it’s wading through treacle.

It’s frustrating, but it doesn’t mean your TV is destined for the dump. Simply adding an inexpensive streaming dongle can breathe new life into an ageing set. Gadgets such as Amazon’s Fire TV Stick, Roku’s Streaming Stick or a Google TV streamer plugged into a spare HDMI port will instantly upgrade your streaming experience, with a faster interface, up-to-date services and nifty voice search.

Amazon offers both HD and 4K streaming sticks. The basic 1080p model sells for £39.99, while 4K models start from £49.99 .

The packaging and a black Amazon Fire TV Stick

Amazon Fire TV Stick HD

£39.99 at Amazon
£39.99 at John Lewis

Not an Amazon Prime subscriber? Pick up a Roku 4K UHD stick instead. It offers an equally expansive range of streaming apps and services for less than £50. A Google TV streamer , with built-in Chromecast, does much the same for £99.

Roku Streaming Stick®

Roku 4K Streaming Stick Plus

£24.99 at Argos
£39.99 at Currys

In all likelihood, your TV will have a spare USB port that can provide power to the stick, so you won’t even need to trail an extra wire to a plug socket.

Alternatively, you can really level up your TV experience by adding a Sky Stream . This diminutive box adds a full package of premium Sky channels, including movies and sports, along with embedded apps such as Netflix, Apple TV, Disney+, Prime Video, and mainstream UK broadcasters. The base subscription package costs £15 a month and includes Sky Atlantic, Netflix and Discovery+.

What’s really clever is that there’s no requirement for a dish or aerial, since everything comes via your wifi network. Unlike traditional TV set-top boxes, there’s no built-in hard drive, either. Instead, your playlist is stored in a cloud personal video recorder (PVR), which never runs out of space.


Dumb tricks that actually work

This may sound far-fetched, but if there isn’t really any issue with the tech, then pay attention to your room lighting. It might be worth moving things around to create a better viewing environment. Shift table lamps to prevent them from being reflected, and swap harsh overhead downlights for soft backlighting. A light placed behind the screen reduces eye fatigue and enhances subjective picture contrast. I see this style of bias lighting in studio mastering suites all the time.

If you hanker after a bigger image, why not get closer to your TV? Sounds daft, I know, but there’s an inescapable link between screen resolution and viewing distance.

Back in the day – I’m talking pre-HD era – TV resolution was so poor it made sense to keep your distance. But with today’s 4K UHD and even 8K TVs, pixel density is so much higher that it’s perfectly possible to sit much closer without frazzling your eyeballs.

In an ideal world, your sofa sweet spot will be about 1.5 times the TV’s screen size away. For a 55in TV, that means perching no more than 2m from the screen. If it feels good, sit even closer. You’ll see more detail and finer textures when watching native 4K programmes.


When it’s best to watch movies with the lights on

Back view of three people sitting on couch together and watching TV in dark room
Ambient lighting can do wonders for the picture of some shows and movies. Photograph: SeventyFour/Getty Images

If you like dark, dramatic movies, but are frustrated that your LED TV looks a bit grey when you dim the lights, don’t immediately feel you need to upgrade to an OLED. There’s a simple fix.

LED backlighting enables your set to display a nice, bright image when room lighting is high; but in dim, or no light, that same backlight will cause black levels to grey out, as the backlight becomes more visible. For a more natural-looking image, watch your LED telly in a room with some ambient lighting, and try to avoid full darkness. That way, blacks stay subjectively black, and won’t turn grey.


Make the most of what you have

If your TV pictures generally look a bit washed out or unnatural, it could be your LED TV’s backlight beginning to fade. Head to the Picture settings menu for a simple fix.


In all likelihood, you’ll be watching the Standard image preset, as it will have been the out-of-the-box default on your TV when you bought it. Try switching to Dynamic or Vivid mode. This may well inject some welcome extra colour and pop into your pictures.


Not all HDMI settings are equal

Picture still looking pallid? Double-check your HDMI settings. If you’re watching an HDR programme, from an external source – perhaps a TV set-top box, games console or UHD Blu-ray player – and feel underwhelmed by your TV’s performance, it may be because you’re not watching HDR at all.

Take a look at the HDMI setting in the Picture or General Settings menu, which will usually be designated either Standard or Optimal (or maybe Enhanced). Set this to Optimal to ensure you’re seeing both 4K UHD and HDR signals from your source component.

Conversely, if HDR TV shows look too dark – and your TV was a Black Friday bargain-buy back in the day – then it may not have the inherent brightness to do HDR justice (despite what was stated on the box). In this case, set HDR to Standard, to enjoy a brighter overall image – and forget all about HDR (you’ll still receive 4K resolution).


Improve your TV audio with a soundbar

If your TV’s sound isn’t up to scratch, with speech increasingly difficult to discern, the easiest and most transformative fix is a soundbar.

Poor sound from thin TVs is par for the course. Even the latest models can sound awful, since slim panels leave little room for serious speakers, with audio quality inevitably suffering as a result. Adding a decent soundbar not only boosts volume but also improves clarity, expands the soundstage and restores the sense of scale that the TV speakers can’t match.

If you own a smaller telly (let’s say 43in or thereabouts), consider the Sharp HT-SB700. This Dolby Atmos soundbar is just 640mm wide, but it can play at 140W. I reckon it’s a bit of a bargain.

A SHARP HT-SB700 2.0.2 Compact Sound Bar with Dolby Atmos in black

Sharp HT-SB700

£89 at Tesco
£89.99 at Currys

If you own a larger TV, consider the £999 Samsung HW-Q990F. A generous 1,309mm wide, it houses 23 speakers, boasts 756W of total power and comes with a wireless subwoofer and two wireless rear speakers, for a fully immersive cinema sound experience. It sounds positively epic.

Samsung HW-Q990F Bluetooth wifi Cinematic Soundbar with Dolby Atmos, DTS:X, Wireless Subwoofer & Rear Speakers in Black

Samsung HW-Q990F soundbar

£999 at John Lewis
£999 at Currys

When all else fails – it really is time to buy a new TV

All that being said, there remain very good reasons to splurge on a new TV. If it’s toppled off its stand and now presents everything through what appears to be an 80s Top of the Pops visual effects filter, then recycling it is your only recourse (we’d recommend Recycle Your Electricals ).

Similarly, if the design is out of the ark (I’m looking at you, chunky TV with a ludicrously wide picture bezel), then an upgrade is equally understandable. Wide bezels will never be back in style. And if you’ve really set your heart on an OLED, I’m not going to stand in your way. I love OLED screens: they’re gorgeous, and make even rotten TV programmes look ravishing.

However, if you’re simply bugged by niggling aspects of your trusty, dusty companion’s performance, or simply yearn for a smarter connected experience, then money spent elsewhere can definitely stave off that next big-screen buy.

For more, read when it’s time to upgrade your laptop and how to make your smartphone last longer


Steve May is a technology and home entertainment specialist, with more than 30 years’ experience writing about TVs, hi-fis and music. From the biggest, thinnest, brightest TVs in existence, to sound systems that rival commercial cinemas, Steve has auditioned them all. When not writing about tech, you’ll find him bingeing box sets, or cataloguing his Blu-ray collection


OrthoRoute – GPU-accelerated autorouting for KiCad

Hacker News
bbenchoff.github.io
2025-11-18 18:54:54
Comments...
Original Article

This document is a complement to the README in the Github repository . The README provides information about performance, capabilities, and tests. This document reflects more on the why and how OrthoRoute was developed.

Why I Built This

This is a project born out of necessity. Another thing I was working on needed an enormous backplane. A PCB with sixteen connectors, with 1,100 pins on each connector. That’s 17,600 individual pads, and 8,192 airwires that need to be routed. Here, just take a look:

a view of the backplane, before routing the PCB

Look at that shit. Hand routing this would take months. For a laugh, I tried FreeRouting , the KiCad autorouter plugin, and it routed 4% of the traces in seven hours. If that trend held, which it wouldn’t, that would be a month of autorouting. And it probably wouldn’t work in the end. I had a few options, all of which would take far too long

  • I could route the board by hand. This would be painful and take months, but I would get a good-looking board at the end.
  • I could YOLO everything and just let the FreeRouting autorouter handle it. It would take weeks, because the first traces are easy, the last traces take the longest. This would result in an ugly board.
  • I could spend a month or two building my own autorouter plugin for KiCad. I have a fairly powerful GPU and I thought routing a PCB is a very parallel problem. I could also implement my own routing algorithms to make the finished product look good.

When confronted with a task that will take months, always choose the more interesting path.

A New KiCad API, and a ‘Traditional’ Autorouter

KiCad, Pre-version 9.0, had a SWIG-based plugin system. There are serious deficits with this system compared to the new IPC plugin system released with KiCad 9. The SWIG-based system was locked to the Python environment bundled with KiCad. Process isolation, threading, and performance constraints were a problem. Doing GPU programming with CuPy or PyTorch, while not impossible, is difficult.

The new IPC plugin system for KiCad is a godsend. The basic structure of the OrthoRoute plugin looks something like this:

Orthoroute architecture

The OrthoRoute plugin communicates with KiCad via the IPC API over a UNIX-ey socket. This API is basically a bunch of C++ classes that gives me access to board data – nets, pads, copper pour geometry, airwires, and everything else. This allows me to build a second model of a PCB inside a Python script and model it however I want. With a second model of a board inside my plugin, all I have to do is draw the rest of the owl.

Development of the Manhattan Routing Engine

After wrapping my head around the ability to read and write board information to and from KiCad, I had to figure out a way to route this stupidly complex backplane. A non-orthogonal autorouter is a good starting point, but I simply used that as an exercise to wrap my head around the KiCad IPC API. The real build is a ‘Manhattan Orthogonal Routing Engine’, the tool needed to route my mess of a backplane.

Project PathFinder

The algorithm used for this autorouter is PathFinder: a negotiation-based performance-driven router for FPGAs . My implementation of PathFinder treats the PCB as a graph: nodes are intersections on an x–y grid where vias can go, and edges are the segments between intersections where copper traces can run. Each edge and node is treated as a shared resource.
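As a rough illustration of that graph model (my own sketch, not OrthoRoute's actual data structures), each routing layer becomes a lattice of via-capable intersections, with in-layer edges for trace segments and inter-layer edges for vias:

# Illustrative only: build a routing graph where nodes are (layer, x, y)
# lattice intersections and edges are either in-layer trace segments or
# vias between adjacent layers.
from collections import defaultdict

def build_routing_graph(layers, width, height):
    edges = defaultdict(list)                      # node -> neighbouring nodes
    for l in range(layers):
        for x in range(width):
            for y in range(height):
                node = (l, x, y)
                if x + 1 < width:                  # horizontal trace segment
                    edges[node].append((l, x + 1, y))
                    edges[(l, x + 1, y)].append(node)
                if y + 1 < height:                 # vertical trace segment
                    edges[node].append((l, x, y + 1))
                    edges[(l, x, y + 1)].append(node)
                if l + 1 < layers:                 # via to the next layer
                    edges[node].append((l + 1, x, y))
                    edges[(l + 1, x, y)].append(node)
    return edges

graph = build_routing_graph(layers=2, width=4, height=4)
print(len(graph), "nodes")                         # 32 nodes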

PathFinder is iterative. In the first iteration, all nets (airwires) are routed greedily , without accounting for overuse of nodes or edges. Subsequent iterations account for congestion, increasing the “cost” of overused edges and ripping up the worst offenders to re-route them. Over time, the algorithm converges to a PCB layout where no edge or node is over-subscribed by multiple nets.
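Here is a tiny, self-contained toy of that negotiation loop in Python. The graph, capacities, and most of the constants are invented for illustration (the 1.15 growth factor and the 8.0 cap echo numbers discussed later in the post), and OrthoRoute runs the same idea on a vastly larger graph on the GPU. Two nets initially fight over the cheap C-D edge; after one round of negotiation, one of them takes the longer detour and the congestion disappears:

# Toy PathFinder-style negotiated congestion routing. Graph, capacities and
# cost constants are invented for illustration; not OrthoRoute's real code.
import heapq
from collections import defaultdict

GRAPH = {                      # node -> {neighbour: base cost}
    "A": {"C": 1},
    "B": {"C": 1, "E": 2},
    "C": {"A": 1, "B": 1, "D": 1},
    "D": {"C": 1, "E": 2},
    "E": {"B": 2, "D": 2},
}
CAPACITY = 1                   # each edge may carry one net
NETS = {"net1": ("A", "D"), "net2": ("B", "D")}

def edge(u, v):
    return tuple(sorted((u, v)))

def route_net(src, dst, usage, history, pres_fac):
    # Dijkstra where congested edges get more expensive (the "negotiation").
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, base in GRAPH[u].items():
            over = max(0, usage[edge(u, v)] + 1 - CAPACITY)
            cost = (base + history[edge(u, v)]) * (1.0 + pres_fac * over)
            if d + cost < dist.get(v, float("inf")):
                dist[v], prev[v] = d + cost, u
                heapq.heappush(heap, (dist[v], v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

history, routes = defaultdict(float), {}
for iteration in range(1, 21):
    pres_fac = min(1.15 ** iteration, 8.0)   # capped present-congestion penalty
    usage = defaultdict(int)
    for name, (src, dst) in NETS.items():    # rip up and re-route every net
        routes[name] = route_net(src, dst, usage, history, pres_fac)
        for u, v in zip(routes[name], routes[name][1:]):
            usage[edge(u, v)] += 1
    overused = [e for e, n in usage.items() if n > CAPACITY]
    for e in overused:
        history[e] += 1.0                    # history accumulates, never decays
    if not overused:
        break

print(routes)   # {'net1': ['A', 'C', 'D'], 'net2': ['B', 'E', 'D']}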

With this architecture – the PathFinder algorithm on a very large graph, within the same order of magnitude of the largest FPGAs – it makes sense to run the algorithm with GPU acceleration. There are a few factors that went into this decision:

  1. Everyone who’s routing giant backplanes probably has a gaming PC. Or you can rent a GPU from whatever company is advertising on MUNI bus stops this month.
  2. The PathFinder algorithm requires hundreds of billions of calculations for every iteration, making single-core CPU computation glacially slow.
  3. With CUDA, I can implement an SSSP (parallel Dijkstra) to find a path through a weighted graph very fast; a rough CPU-side sketch of the idea follows this list.
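The sketch below shows that parallel-friendly structure on the CPU, with NumPy standing in for CUDA: a Bellman-Ford-style SSSP where every edge of a small invented graph is relaxed in the same sweep, which is exactly the shape a GPU kernel exploits (the real router would also fold in the congestion costs described above).

# Illustrative parallel-friendly SSSP (Bellman-Ford relaxation) on an edge
# list; toy graph and weights are mine, and NumPy stands in for CUDA.
import numpy as np

edges_u = np.array([0, 0, 1, 2, 3])       # edge sources
edges_v = np.array([1, 2, 3, 3, 4])       # edge targets
weights = np.array([1.0, 4.0, 2.0, 1.0, 3.0])

dist = np.full(5, np.inf)
dist[0] = 0.0                             # source node
for _ in range(4):                        # |V| - 1 sweeps
    candidate = dist[edges_u] + weights   # relax every edge "at once"
    np.minimum.at(dist, edges_v, candidate)

print(dist)                               # [0. 1. 4. 3. 6.]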

Adapting FPGA Algorithms to PCBs

The original PathFinder paper, "A Negotiation-Based Performance-Driven Router for FPGAs", dates from 1995, which meant early FPGAs like the Xilinx 3000 series and research architectures like Triptych. These devices were simple, and to get a good idea of how they worked, check out Ken Shirriff’s blog. Here’s what the inside of a Xilinx XC2064 looks like:

How FPGAs are laid out

That looks complicated, but it’s really exceptionally simple. All the LUTs, or logic elements, are connected to each other with wires. Where the wires cross over, there are fuses. Burn the fuses and you’ve connected the wires together. It’s a simple graph, and all the complexity of the actual paths inside the chip is abstracted away. For a circuit board, I don’t have this luxury. I have to figure out how to get the signal from the pads on the top layer of the PCB and ‘drill down’ with vias into the grid. I need to come up with some way to account for both the edges and the nodes of the graph, something that’s untrodden territory with the PathFinder algorithm.

The first step of that is the pad escape planner that pre-computes the escape routing of all the pads. Because the entire Manhattan Routing Engine is designed for a backplane, we can make some assumptions: All of the components are going to be SMD, because THT parts would kill the efficiency of a routing lattice. The components are going to be arranged on a grid, and just to be nice I’d like some ‘randomization’ in where it puts the vias punching down into the grid. Here’s what the escape planning looks like:

The escape path planning for individual pads
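And here's a toy sketch of that pre-computation: snap each SMD pad to the nearest lattice cell, then jitter the via drop among free neighbouring cells. The grid pitch, the jitter strategy, and the function names are illustrative assumptions, not OrthoRoute's actual escape planner.

import random

def plan_escape(pads, pitch_nm, occupied, rng=random.Random(0)):
    """Assign each pad a via drop point on the routing lattice.
    `pads` is an iterable of (x_nm, y_nm) pad centres; `occupied` is a set of
    (col, row) lattice cells already claimed. Returns {pad: (col, row)}."""
    escapes = {}
    for pad in pads:
        col = round(pad[0] / pitch_nm)
        row = round(pad[1] / pitch_nm)
        # Nearest lattice cell first, then its eight neighbours in random order,
        # so escape vias don't all pile onto the same lattice columns.
        neighbours = [(col + dc, row + dr)
                      for dc in (-1, 0, 1) for dr in (-1, 0, 1)
                      if (dc, dr) != (0, 0)]
        rng.shuffle(neighbours)
        for cell in [(col, row)] + neighbours:
            if cell not in occupied:
                occupied.add(cell)
                escapes[pad] = cell
                break
    return escapes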

How PathFinder Almost Killed Me, and How I made PathFinder not suck

I found every bug imaginable while developing OrthoRoute. For one, congestion would grow every iteration. The router would start fine with 9,495 congested edges in iteration 1. Then iteration 2: 18,636 edges. Iteration 3: 36,998 edges. The overuse was roughly doubling every iteration instead of converging. Something was fundamentally broken. The culprit? History costs were decaying instead of accumulating. The algorithm needs to remember which edges were problematic in past iterations, but my implementation had history_decay=0.995 , so it was forgetting 0.5% of the problem every iteration. By iteration 10, it had forgotten everything. No memory = no learning = explosion.
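In terms of the per-edge history update, the before/after looks roughly like this, where overuse[e] stands for how many extra nets are sharing edge e this iteration (the variable names are illustrative):

# Broken: history leaks away, so the router slowly forgets chronic congestion.
history[e] = history[e] * 0.995 + overuse[e]

# Fixed: history only ever accumulates, so chronically contested edges keep
# getting more expensive until the nets negotiate their way around them.
history[e] += overuse[e]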

With the history fixed, I ran another test. I got oscillation . The algorithm would improve for 12 iterations (9,495 → 5,527, a 42% improvement!), then spike back to 11,817, then drop to 7,252, then spike to 14,000. The pattern repeated forever. The problem was “adaptive hotset sizing”—when progress slowed, the algorithm would enlarge the set of nets being rerouted from 150 to 225, causing massive disruption. Fixing the hotset at 100 nets eliminated the oscillation.

Even with fixed hotsets, late-stage oscillation returned after iteration 15. Why? The present cost factor escalates exponentially: pres_fac = 1.15^iteration . By iteration 19, present cost was 12.4× stronger than iteration 1, completely overwhelming history (which grows linearly). The solution: cap pres_fac_max=8.0 to keep history competitive throughout convergence.
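Put together, the pressure schedule that survived all this debugging is exponential growth with a hard ceiling. A sketch using the numbers quoted above:

PRES_GROWTH = 1.15    # multiplicative present-cost growth per iteration
PRES_FAC_MAX = 8.0    # cap so the linearly-growing history term stays competitive

def present_factor(iteration):
    # without the cap, iteration 19 is ~12x stronger than iteration 1
    return min(PRES_GROWTH ** iteration, PRES_FAC_MAX)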

PathFinder is designed for FPGAs, and each and every Xilinx XC3000 chip is the same as every other XC3000 chip. Configuring the parameters for an old Xilinx chip means every routing problem will probably converge on that particular chip. PCBs are different; every single PCB is different from every other PCB. There is no single set of history, pressure, and decay parameters that will work on every single PCB.

What I had to do was figure out these parameters on the fly. So that’s what I did. Right now I’m using board-adaptive parameters for the Manhattan router. Before starting the PathFinder algorithm, the plugin analyzes the board in KiCad for the number of signal layers, how many nets will be routed, and how dense the set of nets is. It’s clunky, but it kinda works.
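Here's a sketch of what that kind of board-adaptive tuning can look like. The thresholds and values are invented for illustration; they are not the numbers OrthoRoute ships with.

def tune_parameters(num_layers, num_nets, net_density):
    """Pick PathFinder knobs from coarse board statistics.
    `net_density` is nets per unit of routable lattice area (illustrative)."""
    params = {
        "pres_growth": 1.15,
        "pres_fac_max": 8.0,
        "hotset_size": 100,
        "max_iterations": 40,
    }
    if num_nets > 4000 or net_density > 0.5:
        # Big, dense boards: grow pressure gently and allow more iterations
        params.update(pres_growth=1.08, max_iterations=80)
    elif num_nets < 500:
        # Small boards: be aggressive and converge fast
        params.update(pres_growth=1.3, max_iterations=20)
    if num_layers <= 4:
        # Fewer layers means less slack; keep the hotset small to avoid thrash
        params["hotset_size"] = 50
    return params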

Where PathFinder was tuned once for each family of FPGAs, I’m auto-tuning it for the entire class of circuit boards. A huge backplane gets careful routing and an Arduino clone gets fast, aggressive routing. The hope is that both will converge – produce a valid routing solution – and maybe that works. Maybe it doesn’t. There’s still more work to do.

Routing The Monster Board

After significant testing with “small” boards (actually 500+ net subsets of my large backplane, with 18 layers), I started work on the entire purpose of this project, the 8000+ net, 17000 pad monster board. There was one significant problem: it wouldn’t fit on my GPU. Admittedly, I only have a 16GB Nvidia 5080, but even this was far too small for the big backplane.

This led me to develop a ‘cloud routing solution’. It boils down to exporting an “OrthoRoute PCB file” from the OrthoRoute plugin. From there, I rent a Linux box with a GPU and run the autorouting algorithm in headless mode, which produces an “OrthoRoute solution file”. I then import that back on my local machine by running the OrthoRoute plugin, loading the solution file, and pushing the result to KiCad.

Here’s the result:

Full backplane view showing completed routing
The full backplane view: 8,192 nets routed through 32 layers, in OrthoRoute
Zoomed detail of Manhattan lattice routing
Zoomed detail showing Manhattan lattice routing density
Routed board imported into KiCad
The routed board imported back into KiCad for final cleanup

That’s it, that’s the finished board. A few specs:

  • 44,233 blind and buried vias. 68,975 track segments.
  • Routed on an 80GB A100 GPU, rented on vast.io. The total VRAM required to route this board was 33.5 GB, so close to squeezing under 32 GB and letting me rent a cheaper GPU.
  • Total time to route this board to completion was 41 hours. This is far better than the months it would have taken FreeRouting to route this board, but it’s still not fast.
  • The routing result is good but not great. A big problem is the DRC-awareness of the escape pad planning. There are traces that don’t quite overlap, but because of the geometry generated by the escape route planner they don’t pass a strict DRC. This could be fixed in future versions. There are also some overlapping traces in what PathFinder generated. Not many, but a few.

While the output from my autorouter isn’t perfect, no one would expect an autorouter to produce a perfect, production-ready result. It’s an autorouter, something you shouldn’t trust. Turning the result from OrthoRoute into a DRC-compliant board took a few days, but it was far easier than the intractable problem of eight thousand airwires I had at the beginning.

The Future of OrthoRoute

I built this for one reason: to route my pathologically large backplane. Mission accomplished. And along the way, I accidentally built something more useful than I expected.

OrthoRoute proves that GPU-accelerated routing isn’t just theoretical, and that algorithms designed for routing FPGAs can be adapted to the more general class of circuit boards. It’s fast, too. The Manhattan lattice approach handles high-density designs that make traditional autorouters choke. And the PathFinder implementation converges in minutes on boards that would take hours or days with CPU-based approaches.

More importantly, the architecture is modular. The hard parts (KiCad IPC integration, the GPU acceleration framework, DRC-aware routing space generation) are done. Adding new routing strategies on top of this foundation is straightforward. Someone could implement different algorithms, optimize for specific board types, or extend it to handle flex PCBs.

The code is up on GitHub . I’m genuinely curious what other people will do with it. Want to add different routing strategies? Optimize for RF boards? Extend it to flex PCBs? PRs welcome, contributors welcome.

And yes, you should still manually route critical signals. But for dense digital boards with hundreds of mundane power and data nets? Let the GPU handle it while you grab coffee. That’s what autorouters are for.

Never trust the autorouter. But at least this one is fast.


Pebble, Rebble, and a Path Forward

Lobsters
ericmigi.com
2025-11-18 18:54:19
Comments...
Original Article

I believe the Pebble community, Core Devices, Rebble and I all want the same thing. We love our Pebbles and want them to keep working long into the future. We love the community that has sprung up around Pebble, and how it’s persevered - next year will be the 14th anniversary of the original Kickstarter campaign!

But I have to respond to claims made by Rebble posted on their blog yesterday. I will link to their post so you can read their side of the story, and I’ve asked them to link back to this blog post from theirs.

Look - I’m the first person to call myself out when I fail. I wrote a detailed blog post about Success and Failure at Pebble and often write in detail about learning from my mistakes. But in this specific case, you’ll find that I’ve done my utmost to respect the Pebble legacy and community. Rebble is misleading the community with false accusations.


For those just passing through, here’s the TLDR:

Core Devices is a small company I started in 2025 to relaunch Pebble and build new Pebble smartwatches . Rebble is a non-profit organization that has supported the Pebble community since 2017. Rebble has done a ton of great work over the years and deserves recognition and support for that.

Core Devices and Rebble negotiated an agreement where Core would pay $0.20/user/month to support Rebble services. But the agreement broke down over the following disagreement.

Rebble believes that they ‘100%’ own the data of the Pebble Appstore. They’re attempting to create a walled garden around 13,000 apps and faces that individual Pebble developers created and uploaded to the Pebble Appstore between 2012 and 2016. Rebble later scraped this data in 2017.

I disagree. I’m working hard to keep the Pebble ecosystem open source. I believe the contents of the Pebble Appstore should be freely available and not controlled by one organization.

Rebble posted a blog post yesterday with a bunch of false accusations, and in this post I speak to each of them.


Sections

Short history #

  • Dec 2016 - Pebble shut down . Some IP was sold to Fitbit. I blogged about why I think we failed. Fitbit continued to run the Pebble Appstore and web services for 1.5 years. I really appreciated that.
    • Rebble organization grew out of the official Pebble Developers Discord.
  • July 2018 , Fitbit shut down the Pebble appstore .
    • Before it shut down, Rebble (and others ) scraped all 13,000 apps and metadata from the Pebble Appstore. Rebble began hosting a copy of the appstore . They created a new Dev Portal where developers could upload new apps, roughly 500 have been uploaded since July 2018.
    • Rebble also reverse engineered many Pebble web services (weather, timeline and voice transcription) and provided them as a paid service for the Pebble community.
  • Jan 2025 - Google open sourced PebbleOS , breathing new life into the community.
  • March 2025 - I announced a new company (Core Devices) and 2 new watches - store.rePebble.com
  • November 2025 - we finished shipping out 5,000 Pebble 2 Duos . We’re working hard on Pebble Time 2. We’re aiming to start shipping in January.

Their accusations #

Accusation 1 : ‘Rebble paid for the work that [Eric] took as a base for his commercial watches’

Facts:

  • I think they’re accusing me of ‘stealing’ open source contributions to PebbleOS that Rebble paid for. This is entirely false.
  • We did not take any PebbleOS work Rebble paid for ‘as a base for [our] commercial watches’. To the best of my knowledge, Rebble never paid the developer who ported NimBLE into PebbleOS. My best guess is that they are referring to Rebble having paid CodeCoup, the company behind NimBLE, to fix some bugs that affected older non-Core Devices watches. Any Rebble-sponsored CodeCoup commits are not present in our repo. In fact, the opposite is true - we paid CodeCoup $10,000 to fix multiple BLE stack issues, some of them on the host side that benefit all devices, including old Pebbles. Update: I’m told Rebble did pay him, months later. My point is valid - when we shifted development to our repo, Rebble had not paid anything. More broadly, I reject the premise that using open source software under the terms of the license, regardless of who funds development, is ‘stealing’.
  • We started using our own repo for PebbleOS development because PRs on the Rebble repo reviews were taking too long. We only had one firmware engineer at the time (now we have a whopping 2!) and he felt like he was being slowed down too much. All of our contributions to PebbleOS have been 100% open source .
  • Overall, the feedback that PebbleOS could benefit from open governance is well taken. Long term, PebbleOS would be a good fit for an open source organization with experience in open governance, like Apache or the Linux Foundation. I wrote about this last week.
  • With our small team and fairly quick development schedule, it's true that we haven't PRed our changes into Rebble’s repo. It’s tough to prioritize this while we are busy fixing bugs and getting ready for Pebble Time 2.

Accusation 2: ‘Core took Rebble’s work’ on libpebblecommon to create libpebble3

Facts:

  • The majority (>90%) of our new open source libpebble3 library was written by Core Devices employees. The remainder comes from libpebblecommon , another open source library written by two people.

  • In April 2025, Core purchased the copyright to the libpebblecommon code from the two maintainers and incorporated it into libpebble3, which is also open source.

  • All our contributions to libpebble3 are GPL-3.0 licensed. Here’s the motivation behind our licensing strategy for this repo. We use the same CLA agreement as Matrix, Qt and MySQL. Our CLA explicitly includes a clause that requires Core Devices to distribute all contributions under an OSI-compatible FOSS license (e.g. GPLv3).

  • Note that neither Rebble libpebblecommon maintainer signed the Rebble blog post.

Side note regarding Cobble: I don’t think Rebble even knows this, but in 2024 I personally spent over $30,000 to support its development, way before PebbleOS was open source. It was my own way to support the community.

Accusation 3: ‘ Core promised that they would let Rebble maintain and own the developer site’

Facts:

  • Nothing of the sort was agreed upon. See the full written agreement that Core Devices has with Rebble towards the bottom. Rebble agreed that Core would host the developer site.
  • I have been maintaining and updating the developer site personally - all open source . Having two sources of truth would be confusing for the community.

Accusation 4: ‘ [Eric] scraped our app store, in violation of the agreement that we reached with him previously’

Note: ‘scraping’ usually means the automated extraction of data from a website.

Facts:

  • Here’s what happened. I wanted to highlight some of my favourite watchfaces on the Pebble Appstore. Last Monday Nov 10, after I put my kids to sleep and between long calls with factories in Asia, I started building a webapp to help me quickly go through Pebble Appstore and decide which were my top picks.
  • Let me be crystal clear - my little webapp did not download apps or ‘scrape’ anything from Rebble. The webapp displayed the name of each watchface and screenshots and let me click on my favs. I used it to manually look through 6000 watchfaces with my own eyes. I still have 7,000 to go. Post your server logs, they will match up identically to the app I (well…Claude) wrote ( source code here )
  • I integrated these picks into the Pebble Appstore on Saturday and posted about it on Sunday .

All four of these accusations could have been clarified simply by asking me. Instead, Rebble decided to post them on their blog and threaten a lawsuit.

Backstory #

How did we get here?

Why are there dueling blog posts in the Pebbleverse?

I think most of the people behind Rebble are great and the community overall is awesome. I know they truly mean well, but there are many aspects of the org that are severely troubling. I am very close with one of the Rebble board members, who I consider a personal friend. Over the years, I learned a lot about the organization and helped coach him through some major disputes between board members.

I exchanged literally thousands of messages with my friend on this topic over the span of 3 years. I refrained from getting too involved, despite being asked several times to join Rebble as a board member or lead the organization. I demurred - I saw how painful it was for him and I had no interest in being part of that.

Core Devices + Rebble: 2025

PebbleOS is now open source ! Yay. This is thanks to the work of many Googlers, ex-Pebblers and others - I called out (hopefully) all of them in my blog post in March. I really wanted Rebble to be a part of the Pebble revival going forward. I hired 3 people from Rebble to join Core Devices. I regularly brought up Rebble’s efforts over the years.

I engaged with Rebble folks in discussions in the spring on how we could formally work together, and then made some concrete proposals in the summer. One difficulty was that Core Devices is a business with customers and schedules. This didn’t always sync up with the timeframes of a non-profit. Things became very drawn out. It was very hard to pin people down, even on simple stuff like what the goals of Rebble as an organization were.

Regardless, I continued pushing to make Rebble a key part of the Pebble relaunch.

By August, we finally got close to an agreement.

On September 30 2025, we agreed to the following document and published respective blog posts (ours, theirs). Core Devices would pay Rebble $0.20/user/month. I considered it a donation to a group that has done so much to support the community. But I purposely pushed for openness - no single group (Core Devices or Rebble) should be in control.

Notice the final bullet in the App store section:

All binary/metadata (including historical apps) will be published as archive file (no scraping Rebble services)

Looking back, we should have had more clear wording in this agreement. But this was after months of chat discussions and hours of Zoom calls. I honestly thought that we had reached an agreement to make the archive open, like in this message I received from a Rebble board member.

By the end of October, Rebble had changed their mind about providing an archive file.

Our disagreement #

Notwithstanding their false accusations of theft, the crux of our disagreement is the archive of 13,000 Pebble apps and watchfaces that were uploaded to the Pebble Appstore before it was shut down in July 2018.

  • I believe that these apps and watchfaces should be archived publicly and freely accessible by anyone. They should not be held behind a walled garden by one organization. I repeatedly advocated for hosting this data on a neutral 3rd party like Archive.org.
  • Rebble believes ‘the data behind the Pebble App Store is 100% Rebble’ (this is a direct quote from their blog post). They repeatedly refer to all watchfaces and watchapps as ‘our data’.

This is just plainly false. The apps and watchfaces were originally uploaded by individual developers to an appstore run by a company that no longer exists. These folks created beautiful work and shared them freely with the Pebble community. I’ve spoken with numerous Pebble app developers about this. After the fall of Pebble Tech Corp, none of them envisioned one single organization claiming ownership of their work and restricting access, or charging money for access.

Let’s do the right thing - honour the original developers and create a free publicly available archive of their beautiful watchfaces and watchapps.

Our plan for the future #

It's easy to assume the worst in situations like this. But our plan for the appstore is pretty straightforward. We’re working on rewriting the appstore frontend to be native in the mobile app rather than a web view. Rebble’s appstore backend API will be the data source. Rebble’s dev portal is where developers upload apps. No subscription or Rebble account will be required to download apps. We intend to curate how the appstore is displayed in the Pebble app.

We’re excited to see other Pebble-supporting mobile apps pop up - like MicroPebble and GadgetBridge , offering different features and experiences. We’d love to support these efforts with open source code or financially.

Reading things like ‘ We’re happy to let them build whatever they want as long as it doesn’t hurt Rebble ’ in their blog post worries me. Take our voice-to-text and weather features. Rebble currently offers these as part of their paid subscription. Our new Pebble mobile app includes an on-device speech-to-text feature. We’re planning to include weather for free in our app and make the data available to all watchfaces so you don’t need to configure each one separately. These features are better for users, but would they ‘hurt’ Rebble? Will I need to ask permission from Rebble before building these features? It’s clear that the goals of a non-profit and a device manufacturer will not always be in alignment.

Now consider the appstore. It’s a fundamental part of the Pebble experience. Even before yesterday’s accusations, I felt wary about relying too heavily on a 3rd party like Rebble to provide such a critical service. When people buy a watch from Core Devices, they expect to be able to download apps and watchfaces. If Rebble leadership changes their mind, how can I be certain I can deliver a good experience for our customers? This is one of the primary reasons I think it’s important for an archive of the Pebble Appstore to be freely available.

Rebble still has the chance to redeem itself and do right by the community #

Rebble - prove that you believe in an open, unrestricted Pebble community. Tear down the walled garden you are trying to create. Publish your copy of the Pebble Appstore archive. Stop saying that you ‘100%’ own other developers’ data. Let’s move on from this ridiculous sideshow and focus on making Pebble awesome!

I’ve worked hard to structure everything that we’re doing to be sustainable for the long term, and to do right by the Pebble community. I think Rebble should do the same.

I earned almost nothing from Pebble Tech Corp. I paid myself a $65,000 salary each year. I did not get any payout through the asset sale. I fought to make sure that all Pebble employees were taken care of as best as possible, and that the Pebble community would live on. I believe that at every turn, I’ve done right by the community.

I didn’t relaunch Pebble to make a lot of money. My goal this time round is to make it sustainable. I want to continue making more watches and cool gadgets. There are no investors. I am taking huge risks doing this. I relaunched it because I love Pebble and want it to live on long into the future. Generally, I am excited and positive for the future, despite everything.

For everyone else, again, I apologize for the extreme amounts of inside baseball and the better things you could be doing with your time. I’ll leave the comments open here. Please refrain from any personal attacks or vicious comments (at myself or other people) - follow the HN guidelines .

Eric Migicovsky

Show HN: RowboatX – open-source Claude Code for everyday automations

Hacker News
github.com
2025-11-18 18:50:00
Comments...
Original Article


RowboatX - CLI Tool for Background Agents

  • Create background agents with full shell access
    • E.g. "Generate a NotebookLM-style podcast from my saved articles every morning"
  • 🔧 Connect any MCP server to add capabilities
    • Add MCP servers and RowboatX handles the integration
  • 🎯 Let RowboatX control and monitor your background agents
    • Easily inspect state on the filesystem

Inspired by Claude Code, RowboatX brings the same shell-native power to background automations.

Quick start

  1. Set your LLM API key. Supports OpenAI, Anthropic, Gemini, OpenRouter, LiteLLM, Ollama, and more.

    export OPENAI_API_KEY=your-openai-api-key  
    
  2. Install RowboatX

    npx @rowboatlabs/rowboatx
    

Demo

Screenshot

Examples

Add and Manage MCP servers

$ rowboatx

  • Add MCP: 'Add this MCP server config: <config> '
  • Explore tools: 'What tools are there in <server-name> '

Create background agents

$ rowboatx

  • 'Create agent to do X.'
  • '... Attach the correct tools from <mcp-server-name> to the agent'
  • '... Allow the agent to run shell commands including ffmpeg'

Schedule and monitor agents

$ rowboatx

  • 'Make agent <background-agent-name> run every day at 10 AM'
  • 'What agents do I have scheduled to run and at what times'
  • 'When was <background-agent-name> last run'
  • 'Are any agents waiting for my input or confirmation'

Run background agents manually

rowboatx --agent=<agent-name> --input="xyz" --no-interactive=true
rowboatx --agent=<agent-name> --run_id=<run_id> # resume from a previous run

Rowboat Classic UI

To use Rowboat Classic UI (not RowboatX), refer to Classic .

The 11 best US Black Friday and Cyber Monday travel deals already taking off

Guardian
www.theguardian.com
2025-11-18 18:47:21
From Away, Calpak and REI, here are budget-friendly reasons to upgrade your suitcase, replace your headphones and finally invest in a set of packing cubesSnag these home and kitchen Black Friday and Cyber Monday dealsSign up for the Filter US newsletter, your weekly guide to buying fewer, better thi...
Original Article

While the Black Friday deals landscape can be overwhelming – and you might be tempted to avoid it completely – some actually incredible deals do exist out there, particularly in the travel space. As a travel journalist and the writer of a packing list newsletter , I’m always on the hunt for luggage, clothing and gear that will streamline my travel process. During Black Friday and Cyber Monday sales, I keep a trained eye on the retailers with genuine discounts on carry-on suitcases, comfortable loungewear and more. Pro tip: if a specific item catches my eye, I will Google it to see if another website is offering a more enticing deal. (It usually is.)

So if you’re hunting for items that will upgrade your travels without blowing your budget, use my curated guide to inform your shopping. I’ll be regularly updating the deals throughout the holiday sales period, so check back here for more savings over the next two weeks.


How I selected these Black Friday and Cyber Monday travel deals

My north star for investing in travel-related items has always been quality over quantity. I don’t need three kinds of carry-on suitcases; I just need one I can rely on for every trip .

I started my search for deals by outlining the key items every traveler should own. I also considered the “nice to haves,” or things that have made travel easier for me over the years. Then, I went to work hunting down those specific pieces from reputable retailers and brands – most of which I frequently shop from – and determining if the discounted prices were worthwhile. The ones that made the cut are featured below.


At a glance: the very best Black Friday and Cyber Monday travel deals

  • The best luggage deal:
    Away Packing Pro Bundle

Now $257, originally $343 at Away
  • The best travel tech deal:
    Apple AirTags

Now $64.99, originally $99 at Amazon
  • The best travel clothing deal:
    Forever Fleece Relaxed Crew Sweatshirt

Now $53.40, originally $89 at Athleta

The best luggage deals

Calpak Terra 26L Laptop Duffel Backpack
Photograph: Courtesy of Calpak

Calpak Terra 26L Laptop Duffel Backpack

Now $158.40, originally $198 at Calpak

I started using the Calpak Terra 26L Laptop Duffel Backpack last winter and was immediately blown away by the sheer amount it can comfortably carry. I’d even say it holds as much as my suitcase – and it will still fit underneath the airplane seat. The clamshell opening to a spacious main compartment makes packing a breeze, and the interior compression strap ensures everything stays secure. Part backpack, part duffel – you’re basically getting two bags for one.


Away Packing Pro Bundle displayed on a white background
Photograph: Courtesy of Away

Away Packing Pro Bundle

Now $257, originally $343 at Away

Away is my go-to brand for luggage. I’ve used the Bigger Carry-On for years, and I firmly believe (after trying at least 10 other suitcases) that it holds more than any other carry-on out there. The bundle comes with a set of packing cubes – which are helpful with suitcase organization – so it’s a great starter pack for anyone just beginning their travel journey or those who tend to overpack.

Away’s early Black Friday sale means everything from the luggage brand is 25% off, but you’ll want to stay focused on the items with the best cost-per-use. For the everyday traveler, that is the Away Packing Pro Bundle. The set includes the Bigger Carry-On and the Insider Packing Cubes (set of four), two travel essentials that can be used together or separately. Bonus: you can opt to get both in the same color or mix and match.


Roam Check-In Expandable displayed on a white background
Photograph: Courtesy of Roam

Roam Check-In Expandable

Now $545, originally $725 at Roam

Make no mistake, high-quality checked suitcases are expensive, so I always recommend waiting to purchase one until it’s on sale. While the Roam Check-In Expandable suitcase is still on the pricier side, it’s the kind of suitcase you only need to buy once. Designed with an expandable feature (a 2in zipper expansion) and compression boards, it’ll comfortably hold between 10 and 13 outfits.


The best travel tech and gear deals

European Travel Plug Adapter Set
Photograph: Courtesy of Amazon

European Travel Plug Adapter Set

Now $12.66, originally $16.99 at Amazon

Is European travel in your future – or do you need a practical present for someone heading abroad? If the answer is yes to either, grab this European travel plug adapter set while it’s on sale (26% off). It comes with one type-C plug adapter (works for Americans traveling to Germany, Italy, France and Spain, among other countries) and one type-G mini adapter, which you’d use in the UK.


JBL Tune 720BT Wireless Over-Ear Headphones
Photograph: Courtesy of Amazon

JBL Tune 720BT Wireless Over-Ear Headphones

Now $44.95, originally $89.95 at Amazon

As someone who has lost too many AirPods while in transit, I’m a full headphones convert. This pair from JBL is 50% off and highly rated by thousands of shoppers. If you’re not totally sold on larger headphones but want to give them a try (without splurging on a pricier version), this is the way to go. Not to mention, it comes in a handful of colorways, including purple and blue.


Apple AirTag, 4 Pack displayed on a white background
Photograph: Courtesy of Amazon

Apple AirTag 4-Pack

Now $64.99, originally $99 at Amazon

There are certain things you can’t control at the airport – ahem, lost luggage – but you can arm yourself with tech that can track your belongings. Keeping an Apple AirTag in both my checked bag and carry-on gives me peace of mind when I’m making a tight connection or using a transfer service. If you haven’t invested in your own set of AirTags, now is the time.

If you went to Apple.com to purchase a four-pack of AirTags right now, you would pay full price. But on Amazon – and Walmart – the nifty tracking devices are under $65. While they’re definitely useful while traveling, AirTags can also do wonders in your daily life: attach one to your keys, throw one in your purse, or slip one in your wallet.


Travel Inspira Luggage Scale displayed on a white background
Photograph: Courtesy of Amazon

Travel Inspira Luggage Scale

Now $9.99, originally $12.99 at Amazon

Searching for a stocking stuffer? The Travel Inspira Luggage Scale is the type of item most people don’t realize is missing in their life until they experience the reassurance that comes with weighing a suitcase before arriving at the airport. You simply loop the weighing belt through your luggage handle and hold it up to get a read. Yes, the sale price saves you a few dollars, but you’ll also never pay an overweight luggage fee again.


The best travel clothing deals

Men’s Evolution EMB Oversize Hoodie
Photograph: Courtesy of REI

Men’s Evolution EMB Oversize Hoodie

Now $45.49, originally $65 at REI

REI’s holiday sale features thousands of items that would work well in any traveler’s uniform, but the men’s Evolution EMB Oversize Hoodie (now 30% off) is a true crowd-pleaser. It’s just the right amount of oversized without looking messy – the ribbing on the cuffs and hem gives it a clean look – and you can confidently and comfortably throw it on for any upcoming travel.


Women’s Pioneer Camp Packable Puffer
Photograph: Courtesy of Amazon

Women’s Pioneer Camp Packable Puffer

Now $47.59, originally $55.99 at Amazon

Choosing outerwear is always one of the hardest parts of packing. The general rule of thumb is to wear your jacket or coat while in transit (so you don’t have to fit it in your suitcase), but sometimes you want to bring an extra layer. In that case, I’ll go with something like the Pioneer Camp Women’s Packable Puffer – a lightweight, water-repellent style that can be packed down into its carrying bag. The timing of the sale is perfect, too; this is the type of piece you need for most winter travel.


Women’s Forever Fleece Relaxed Crew Sweatshirt
Photograph: Courtesy of Athleta

Women’s Forever Fleece Relaxed Crew Sweatshirt

Now $53.40, originally $89 at Athleta

I’m a firm believer that travel outfits should be both comfortable and presentable. I tend to stick with sweats and loungewear in solid, neutral colors, like the Forever Fleece Relaxed Crew Sweatshirt from Athleta. Not only will it never go out of style, but the dark navy also masks any inevitable travel stains.

The Athleta pre-Black Friday sale – 30% off everything when you download the Athleta app – is incredibly tempting for those who live in athleisure , but don’t go crazy just yet. Instead, only invest in the pieces that deserve a coveted spot in your suitcase. This cotton crewneck sweatshirt is machine-washable (a crucial trait for travel clothing) and comes in a ton of solid, neutral colors. Read: it will work for most, if not all, of the trips you have on your calendar.


Italic Amara Cashmere Wrap displayed on a white background
Photograph: Courtesy of Nordstrom

Italic Amara Cashmere Wrap

Now $167, originally $279 at Nordstrom

A cashmere wrap is one of my travel essentials. It’s more elevated than a blanket, but it keeps you just as warm on a chilly plane. It also doubles as a scarf – and any travel item that can play more than one role gets extra points in my book. On sale for more than $100 off, this 100% cashmere wrap from Italic checks all the boxes: chic, versatile and functional.


Airlines Will Shut Down Program That Sold Your Flights Records to Government

403 Media
www.404media.co
2025-11-18 18:43:23
The move comes after intense pressure from lawmakers and 404 Media’s months-long reporting about the airline industry's data selling practices....
Original Article

Airlines Reporting Corporation (ARC), a data broker owned by the U.S.’s major airlines, will shut down a program in which it sold access to hundreds of millions of flight records to the government and let agencies track peoples’ movements without a warrant, according to a letter from ARC shared with 404 Media.

ARC says it informed lawmakers and customers about the decision earlier this month. The move comes after intense pressure from lawmakers and 404 Media’s months-long reporting about ARC’s data selling practices. The news also comes after 404 Media reported on Tuesday that the IRS had searched the massive database of Americans flight data without a warrant.

“As part of ARC’s programmatic review of its commercial portfolio, we have previously determined that TIP is no longer aligned with ARC’s core goals of serving the travel industry,” the letter, written by ARC President and CEO Lauri Reishus, reads. TIP is the Travel Intelligence Program. As part of that, ARC sold access to a massive database of peoples’ flights, showing who travelled where, and when, and what credit card they used.

The ARC letter.

“All TIP customers, including the government agencies referenced in your letter, were notified on November 12, 2025, that TIP is sunsetting this year,” Reishus continued. Reishus was responding to a letter sent to airline executives earlier on Tuesday by Senator Ron Wyden, Congressman Andy Biggs, Chair of the Congressional Hispanic Caucus Adriano Espaillat, and Senator Cynthia Lummis. That letter revealed the IRS’s warrantless use of ARC’s data and urged the airlines to stop the ARC program. ARC says it notified Espaillat's office on November 14.

ARC is co-owned by United, American, Delta, Southwest, JetBlue, Alaska, Lufthansa, Air France, and Air Canada. The data broker acts as a bridge between airlines and travel agencies. Whenever someone books a flight through one of more than 12,800 travel agencies, such as Expedia, Kayak, or Priceline, ARC receives information about that booking. It then packages much of that data and sells it to the government, which can search it by name, credit card, and more. 404 Media has reported that ARC’s customers include the FBI, multiple components of the Department of Homeland Security, ATF, the SEC, TSA, and the State Department.

“Because ARC only has data on tickets booked through travel agencies, government agencies seeking information about Americans who book tickets directly with an airline must issue a subpoena or obtain a court order to obtain those records. But ARC’s data sales still enable government agencies to search through a database containing 50% of all tickets booked without seeking approval from a judge,” the letter from the lawmakers reads.

About the author

Joseph is an award-winning investigative journalist focused on generating impact. His work has triggered hundreds of millions of dollars worth of fines, shut down tech companies, and much more.

Joseph Cox

Gurman Says Apple Has No Plans to Update the Mac Pro

Daring Fireball
www.bloomberg.com
2025-11-18 18:27:29
Mark Gurman, in his (paywalled, alas) Power On column for Bloomberg over the weekend: The next major update didn’t arrive until 2023, when Apple finally transitioned the desktop to in-house chips with the M2 Ultra Mac Pro. Two years later, that model remains largely unchanged. And it’s been over...
Original Article


NPR to get $36M in settlement to operate US public radio system

Hacker News
apnews.com
2025-11-18 18:19:10
Comments...
Original Article

WASHINGTON (AP) — National Public Radio will receive approximately $36 million in grant money to operate the nation’s public radio interconnection system under the terms of a court settlement with the federal government’s steward of funding for public broadcasting stations.

The settlement, announced late Monday, partially resolves a legal dispute in which NPR accused the Corporation for Public Broadcasting of bowing to pressure from President Donald Trump to cut off its funding.

On March 25, Trump said at a news conference that he would “love to” defund NPR and PBS because he believes they are biased in favor of Democrats.

NPR accused the CPB of violating its First Amendment free speech rights when it moved to cut off its access to grant money appropriated by Congress. NPR also claims Trump, a Republican, wants to punish it for the content of its journalism.

On April 2, the CPB’s board initially approved a three-year, roughly $36 million extension of a grant for NPR to operate the “interconnection” satellite system for public radio. NPR has been operating and managing the Public Radio Satellite System since 1985.

But the CPB reversed course under mounting pressure from the Trump administration, according to NPR . The agency redirected federal interconnection funds away from NPR to an entity that didn’t exist and wasn’t statutorily authorized to receive it, NPR says.


CPB attorneys denied that the agency retaliated against NPR to appease Trump. They had argued that NPR’s claims are factually and legally meritless.

On May 1, Trump issued an executive order that called for federal agencies to stop funding for NPR and PBS. The settlement doesn’t end a lawsuit in which NPR seeks to block any implementation or enforcement of Trump’s executive order. U.S. District Judge Randolph Moss is scheduled to preside over another hearing for the case on Dec. 4.

The settlement says NPR and CPB agree that the executive order is unconstitutional and that CPB won’t enforce it unless a court orders it to do so.

Katherine Maher, NPR’s president and CEO, said the settlement is “a victory for editorial independence and a step toward upholding the First Amendment rights of NPR and the public media system.”

Patricia Harrison, the corporation’s CEO, said in a statement that the settlement marks “an important moment for public media.”

I am stepping down as the CEO of Mastodon

Hacker News
blog.joinmastodon.org
2025-11-18 18:13:30
Comments...
Original Article

After nearly 10 years, I am stepping down as the CEO of Mastodon and transferring my ownership of the trademark and other assets to the Mastodon non-profit. Over the course of my time at Mastodon, I have centered myself less and less in our outward communications, and to some degree, this is the culmination of that trend. Mastodon is bigger than me, and though the technology we develop on is itself decentralized—with heaps of alternative fediverse projects demonstrating that participation in this ecosystem is possible without our involvement—it benefits our community to ensure that the project itself which so many people have come to love and depend on remains true to its values. There are too many examples of founder egos sabotaging thriving communities, and while I’d like to think myself an exception, I understand why people would prefer better guardrails.

But it would be uncouth for me to pretend that there isn’t some self-interest involved. Being in charge of a social media project is, turns out, quite the stressful endeavour, and I don’t have the right personality for it. I think I need not elaborate that the passion so many feel for social media does not always manifest in healthy ways. You are to be compared with tech billionaires, with their immense wealth and layered support systems, but with none of the money or resources. It manifests in what people expect of you, and how people talk about you. I remember somebody jokingly suggesting that I challenge Elon Musk to a fight (this was during his and Mark Zuckerberg’s martial arts feud), and quietly thinking to myself, I am literally not paid enough for that. I remember also, some Spanish newspaper article that for some reason, concluded that I don’t dress as fashionably as Jeff Bezos, based on the extremely sparse number of pictures of myself I have shared on the web. Over an entire decade, these tiny things chip away at you slowly. Some things chip faster. I steer clear of showing vulnerability online, but there was a particularly bad interaction with a user last summer that made me realise that I need to take a step back and find a healthier relationship with the project, ultimately serving as the impetus to begin this restructuring process.

As for what the legacy of my run will be, I find it hard to answer. For one, I think it is not for me to judge. On the other hand, it is as much about what didn’t happen as it is about what did. I’ve always thought that one of the most important responsibilities I had was to say “no”. It is not a popular thing to do, nor is it a fun thing to do, but being pulled into too many different directions at once can spell disaster for any project. I’d like to think I avoided some trouble by being careful. But I’m also aware that my aversion to public appearances cost Mastodon some opportunities in publicity. Ultimately, while I cannot take sole credit for it, I am nevertheless most proud of how far we’ve made it over these last 10 years. From the most barebones project written out of my childhood bedroom, to one of the last remaining and thriving pieces of the original, community-centred internet.

I have so much passion for Mastodon and the fediverse. The fediverse is an island within an increasingly dystopian capitalist hellscape. And from my perspective, Mastodon is our best shot at bringing this vision of a better future to the masses. This is why I’m sticking around, albeit in a more advisory, and less public, role.

Lawsuit Challenges San Jose’s Warrantless ALPR Mass Surveillance

Electronic Frontier Foundation
www.eff.org
2025-11-18 18:11:13
EFF and the ACLU of Northern California Sue on Behalf of Local Nonprofits Contact: Josh Richman, EFF, jrichman@eff.org;  Carmen King, ACLU of Northern California, cking@aclunc.org SAN JOSE, Calif. – San Jose and its police department routinely violate the California Constitution by conducting warran...
Original Article

Contact: Josh Richman, EFF, jrichman@eff.org ;  Carmen King, ACLU of Northern California, cking@aclunc.org

SAN JOSE, Calif. – San Jose and its police department routinely violate the California Constitution by conducting warrantless searches of the stored records of millions of drivers’ private habits, movements, and associations, the Electronic Frontier Foundation (EFF) and American Civil Liberties Union of Northern California (ACLU-NC) argue in a lawsuit filed Tuesday .

The lawsuit, filed in Santa Clara County Superior Court on behalf of the Services, Immigrant Rights and Education Network (SIREN) and the Council on American-Islamic Relations – California (CAIR-CA), challenges San Jose police officers’ practice of searching for location information collected by automated license plate readers (ALPRs) without first getting a warrant.

ALPRs are an invasive mass-surveillance technology: high-speed, computer-controlled cameras that automatically capture images of the license plates of every driver that passes by, without any suspicion that the driver has broken the law.

“A person who regularly drives through an area subject to ALPR surveillance can have their location information captured multiple times per day,” the lawsuit says. “This information can reveal travel patterns and provide an intimate window into a person’s life as they travel from home to work, drop off their children at school, or park at a house of worship, a doctor’s office, or a protest. It could also reveal whether a person crossed state lines to seek health care in California.”

The San Jose Police Department has blanketed the city’s roadways with nearly 500 ALPRs – indiscriminately collecting millions of records per month about people’s movements – and keeps this data for an entire year. Then the department permits its officers and other law enforcement officials from across the state to search this ALPR database to instantly reconstruct people’s locations over time – without first getting a warrant. This is an unchecked police power to scrutinize the movements of San Jose’s residents and visitors as they lawfully travel to work, to the doctor, or to a protest.

San Jose’s ALPR surveillance program is especially pervasive: Few California law enforcement agencies retain ALPR data for an entire year, and few have deployed nearly 500 cameras.

The lawsuit, which names the city, its Police Chief Paul Joseph, and its Mayor Matt Mahan as defendants, asks the court to stop the city and its police from searching ALPR data without first obtaining a warrant. Location information reflecting people’s physical movements, even in public spaces, is protected under the Fourth Amendment according to U.S. Supreme Court case law . The California Constitution is even more protective of location privacy, at both Article I, Section 13 (the ban on unreasonable searches) and Article I, Section 1 (the guarantee of privacy). “The SJPD’s widespread collection and searches of ALPR information poses serious threats to communities’ privacy and freedom of movement."

“This is not just about data or technology — it’s about power, accountability, and our right to move freely without being watched,” said CAIR-San Francisco Bay Area Executive Director Zahra Billoo. “For Muslim communities, and for anyone who has experienced profiling, the knowledge that police can track your every move without cause is chilling. San Jose’s mass surveillance program violates the California Constitution and undermines the privacy rights of every person who drives through the city. We’re going to court to make sure those protections still mean something."

"The right to privacy is one of the strongest protections that our immigrant communities have in the face of these acts of violence and terrorism from the federal government," said SIREN Executive Director Huy Tran. "This case does not raise the question of whether these cameras should be used. What we need to guard against is a surveillance state, particularly when we have seen other cities or counties violate laws that prohibit collaborating with ICE. We can protect the privacy rights of our residents with one simple rule: Access to the data should only happen once approved under a judicial warrant.”

For the complaint: https://www.eff.org/files/2025/11/18/siren_v._san_jose_-_filed_complaint.pdf

For more about ALPRs: https://sls.eff.org/technologies/automated-license-plate-readers-alprs

Show HN: Guts – convert Golang types to TypeScript

Hacker News
github.com
2025-11-18 17:55:55
Comments...
Original Article

Go Unto Ts (guts)

Go Reference

guts is a tool to convert golang types to typescript, enabling consistent type definitions across the frontend and backend. It is intended to be called and customized as a library, rather than as a command line executable.

See the simple example for a basic usage of the library.

type SimpleType[T comparable] struct {
	FieldString     string
	FieldInt        int
	FieldComparable T
	FieldTime       time.Time
}

Gets converted into

type Comparable = string | number | boolean;

// From main/main.go
interface SimpleType<T extends Comparable> {
    FieldString: string;
    FieldInt: number;
    FieldComparable: T;
    FieldTime: string;
}

How to use it

guts is a library, not a command line utility. This is to allow configuration with code, and also helps with package resolution.

See the simple example for a basic usage of the library. A larger example can be found in the Coder repository .

// Step 1: Create a new Golang parser
golang, _ := guts.NewGolangParser()

// Optional: Preserve comments from the golang source code
// This feature is still experimental and may not work in all cases
golang.PreserveComments()

// Step 2: Configure the parser
_ = golang.IncludeGenerate("github.com/coder/guts/example/simple")
// Step 3: Convert the Golang to the typescript AST
ts, _ := golang.ToTypescript()
// Step 4: Mutate the typescript AST
ts.ApplyMutations(
    config.ExportTypes, // add 'export' to all top level declarations
)
// Step 5: Serialize the typescript AST to a string
output, _ := ts.Serialize()
fmt.Println(output)

How it works

guts first parses a set of golang packages. The Go AST is traversed to find all the types defined in the packages.

These types are placed into a simple AST that directly maps to the typescript AST.

Using goja , these types are then serialized to typescript using the typescript compiler API.

Generator Opinions

The generator aims to do the bare minimum type conversion. An example of a common opinion is to use union types to represent enums. Without the mutation, the following is generated:

export enum EnumString {
    EnumBar = "bar",
    EnumBaz = "baz",
    EnumFoo = "foo",
    EnumQux = "qux"
}

Add the mutation:

ts.ApplyMutations(
	config.EnumAsTypes,
)
output, _ := ts.Serialize()

And the output is:

export type EnumString = "bar" | "baz" | "foo" | "qux";

Alternative solutions

The guts package was created to offer a more flexible, programmatic alternative to existing Go-to-TypeScript code generation tools out there.

The other solutions out there function as command-line utilities with yaml configurability. guts is a library, giving it a much more flexible and dynamic configuration that static generators can’t easily support.

Unlike many of its counterparts, guts leverages the official TypeScript compiler under the hood, ensuring that the generated TypeScript definitions are semantically correct, syntactically valid, and aligned with the latest language features.

Helpful notes

An incredible website to visualize the AST of typescript: https://ts-ast-viewer.com/

Ritchie Torres Leaned Right. Will That Cost Him His Seat in Congress?

hellgate
hellgatenyc.com
2025-11-18 17:54:37
How the Bronx congressmember has gone from progressive darling to progressive target....
Original Article

Five years ago, then-City Councilmember Ritchie Torres won a seat in Congress and was widely hailed as a progressive rising star, the first openly gay Afro-Latino member of Congress who rose from a childhood in public housing.

Now, the Bronx congressmember has gone from progressive darling to progressive target, largely over his stance on the Israel-Palestine conflict , not to mention his abandonment of several of his previous left-leaning positions, shifts that have alienated many former allies.

And this time around, Torres is facing primary challengers taking aim at his outspoken support for Israel. Legal Aid Society attorney Dalourny Nemorin, a member of the Democratic Socialists of America, has launched a campaign emphasizing affordability and protections for immigrants. Yet the opponent with the most name recognition at the moment is former assemblymember and failed mayoral candidate Michael Blake, who finished second to Torres in the crowded 2020 primary—and who has undergone a transformation of his own ahead of the rematch, from Israel supporter and AIPAC ally to fierce critic.


Ditch your (mut)ex, you deserve better

Lobsters
chrispenner.ca
2025-11-18 17:48:44
Comments...
Original Article

Having access to multiple parallel CPU cores isn't a new thing by any means; people have been programming in parallel for half a century now, but in recent years we've found ourselves at an inflection point. Moore's law is dying, and beefy single cores are no longer keeping up. Modern computers come with multiple CPU cores, so exploiting parallel compute is more important than ever. Given how long it's been an area of research, we can naturally expect that effective tools have taken root and that synchronizing threads is trivial now, right...?

Unfortunately this has not been my experience, and I'm willing to bet it hasn't been yours either. Managing shared state across threads is hard, and the most commonly used tools: mutexes and semaphores, simply haven't evolved much since their inception.

The words that follow will dig into the problems inherent to mutexes and synchronizing shared mutable state. Afterwards we'll look into other avenues which should prove more helpful.

Let's begin by crafting a simple software system which needs synchronization in the first place.

I'll present a commonly used example: the task of managing bank account balances correctly in spite of parallel transfer requests.

Of course real banks don't store all their account balances in RAM, so I'll hope that the reader can apply the concepts from this pedagogical example to their own domain as necessary; it serves as a stand-in for any sufficiently complex system which requires ad-hoc synchronization of arbitrary data between multiple threads.

Here's some golang'ish pseudo-code (please don't try to actually compile it) for a simple bank account and the operations upon it. I'm focused on the synchronization problems here, so forgive me for skipping the double-entry accounting, input validation, and other real-world complexities.

struct Account {
  balance int,
}

// Deposit money into an account
func (a *Account) deposit(amount int) {
  a.balance += amount
}

// Withdraw money from an account, or return false if there are insufficient funds
func (a *Account) withdraw(amount int) bool {
  if (a.balance < amount) {
    return false
  } else {
    a.balance -= amount
    return true
  }
}

Great! This defines our Account type and some methods for withdrawing and depositing money into such an account. Now let's add a function to transfer money between accounts:

func transfer(from *Account, to *Account, amount int) bool {
  if (from.withdraw(amount)) {
    to.deposit(amount)
    return true
  } else {
    return false
  }
}

Looks good, but now what happens when we start handling multiple requests concurrently?

struct TransferRequest {
  from *Account,
  to *Account,
  amount int,
}

func main() {
  // loop forever, accepting transfer requests and processing them in goroutines
  for {
    req := acceptTransferRequest()
    go transfer(req.from, req.to, req.amount)
  }
}

Things may work well in your tests if you're (un)lucky, and might even work well in production for a while, but sooner or later you're going to lose track of money and have some confused and angry customers.

Do you see why? This brings us to our first synchronization problem to solve, Data Races .

Data races

Most programming languages are imperative with mutable data structures [citation needed] , so passing pointers to multiple threads leads to shared mutable data , and shared mutable data necessarily causes data races .

A data race occurs any time two threads access the same memory location concurrently and at least one of the accesses is a write. When a data race is present, two runs of the same code on the same state may non-deterministically produce different results.

We're passing accounts by reference here, so multiple threads have access to modify the same account. With multiple transfer goroutines running on the same account, each could be paused by the scheduler at nearly any point during its execution. This means that even within this simple example we've already introduced a data race. Take another look at the withdraw function; I'll point it out:

// Withdraw money from an account, or return false if there are insufficient funds
func (a *Account) withdraw(amount int) bool {
  hasFunds := a.balance >= amount 
  // HERE! The scheduler could pause execution here and switch to another thread
  if (hasFunds) {
    a.balance -= amount
    return true
  } else {
    return false
  }
}

If two threads are withdrawing $100 from Alice's account, which only has $150 in it, it's possible that thread 1 checks the balance, sees there's enough money, then gets paused by the scheduler. Thread 2 runs, checks the balance, also sees there's enough money, then withdraws $100. When thread 1 later resumes execution after its check, it withdraws its $100 too, and Alice's account ends up with a negative balance of -$50, which is invalid even though we had validation!

This sort of concurrency error is particularly insidious because the original withdraw method is perfectly reasonable, idiomatic, and correct in a single-threaded program; however, when we add concurrency at a completely different place in the system, we introduce a bug deep within existing, previously correct code. The idea that a perfectly normal evolution from a single-threaded to a multi-threaded program can introduce critical system-breaking bugs in completely unrelated code, without so much as a warning, is quite frankly unacceptable. As a craftsman I expect better from my tools.

Okay, but now that we've lost thousands if not millions of dollars, how do we fix this?

Traditional knowledge points us towards Mutexes .

Mutexes

Okay, we've encountered a problem with our shared mutable state; the traditional approach to solving these problems is to enforce exclusive access to the shared data in so-called "critical sections". Mutexes are so-named because they provide mutual exclusion, meaning only a single thread may access a given virtual resource at a time.

Here's how we can edit our program to fix the data race problems using a mutex:

struct Account {
  mutex Mutex,
  balance int,
}

func (a *Account) deposit(amount int) {
  a.mutex.lock()
  defer a.mutex.unlock()
  a.balance += amount
}

func (a *Account) withdraw(amount int) bool {
  a.mutex.lock()
  defer a.mutex.unlock()
  hasFunds := a.balance >= amount 
  if (hasFunds) {
    a.balance -= amount
    return true
  } else {
    return false
  }
}

Now every Account has a mutex on it, which acts as an exclusive lock.

It's much like a bathroom key in a busy restaurant. When you want to use the bathroom, you take the key; there's only one key for each bathroom, so while you've got hold of it nobody else can use that bathroom. You're free to do your business, then you return the key to the hook on the wall for the next person.

Unlike a bathroom key however, mutexes are only conceptual locks, not real locks, and as such they operate on the honor system.

If the programmer forgets to lock the mutex, the system won't stop them from accessing the data anyway, and even then there's no actual link between the data being locked and the lock itself; we need to trust the programmers to both understand and respect the agreement. A risky prospect on both counts.

In this case, we've addressed the data-race within withdraw and deposit by using mutexes, but we've still got a problem within the transfer function.

What happens if a thread is pre-empted between the calls to withdraw and deposit while running the transfer function? It's possible that money will have been withdrawn from one account, but won't yet have been deposited in the other. This is an inconsistent state of the system: the money has temporarily disappeared, existing only in the operating memory of a thread and not visible in any externally observable state. This can (and will) result in very strange behaviour.

As a concrete way to observe the strangeness let's write a report function which prints out all account balances:

func report() {
    for _, account := range accounts {
        account.mutex.lock()
        fmt.Println(account.balance)
        account.mutex.unlock()
    }
}

If we run a report while transfers are ongoing we'll likely see that the total amount of money in the system is incorrect, and that it changes from report to report, which should be impossible in a closed system like this! This inconsistency occurs even though we obtain the lock for each individual account before checking its balance.

In larger systems this sort of inconsistency problem can cause flaws in even simple logic, since choices may be made against inconsistent system states. The root of this issue is that the transfer function requires holding multiple independent locks, but they're not grouped in any way into an atomic operation.

Composing Critical Sections

We need some way to make the entire transfer operation atomic, at least from the perspective of other threads who are respecting our mutexes.

Okay, well no problem, we can just lock both accounts, right?

func transfer(from *Account, to *Account, amount int) bool {
  from.mutex.lock()
  to.mutex.lock()
  defer from.mutex.unlock()
  defer to.mutex.unlock()

  if (from.withdraw(amount)) {
    to.deposit(amount)
    return true
  } else {
    return false
  }
}

I'm sure some readers have already seen a problem here, but have you seen two problems here?

The first is obvious once you point it out: remember that withdraw and deposit also lock the mutex on the account, so we're trying to acquire the same lock twice in the same thread.

transfer won't even begin to run in this state; it will block forever inside withdraw when it tries to lock from.mutex for the second time.

Some systems, like re-entrant locks and Java's synchronized keyword, do some additional book-keeping which allows a single thread to lock the same mutex multiple times, so using a re-entrant lock here would solve this particular problem. However, other systems, like golang, avoid providing re-entrant locks as a matter of principle.

So what can we do? I suppose we'll need to pull the locks out of withdraw and deposit so we can lock them in transfer instead.

func (a *Account) deposit(amount int) {
  a.balance += amount
}

func (a *Account) withdraw(amount int) bool {
  hasFunds := a.balance >= amount 
  if (hasFunds) {
    a.balance -= amount
    return true
  } else {
    return false
  }
}

func transfer(from *Account, to *Account, amount int) bool {
  from.mutex.lock()
  to.mutex.lock()
  defer from.mutex.unlock()
  defer to.mutex.unlock()

  if (from.withdraw(amount)) {
    to.deposit(amount)
    return true
  } else {
    return false
  }
}

Ugh, a correct transfer function should conceptually just be the composition of our well-encapsulated withdraw and deposit functions, but defining it correctly has forced us to remove the locking from both withdraw and deposit, making both of them less safe to use. It places the burden of locking on the caller (without any system-maintained guarantees), and even worse, we now need to remember to go and add locking around every existing withdraw and deposit call in the entire codebase. Even if we try to encapsulate everything within the module and only export "safe" operations, we've caused duplication since we now need synchronized and unsynchronized versions of our withdraw and deposit operations. And we'd still need to expose the mutexes if we want to allow callers to synchronize operations with other non-Account data.

What I'm getting at is that mutexes don't compose ! They don't allow us to chain multiple critical sections into a single atomic unit, they force us to break encapsulation and thrust the implementation details of mutexes and locking onto the caller who shouldn't need to know the details about which invariants must be maintained deep within the implementation. Adding or removing access to synchronized variables within an operation will also necessitate adding or removing locking to every call site , and those call sites may be in a completely different application or library. This is an absolute mess.

All that sounds pretty bad, but would you believe those aren't the only problems? It's not just composition that's broken: in fixing transfer to make it an atomic operation we've managed to introduce a new, extra-well-hidden deadlock bug.

Deadlocks/Livelocks

Recall that in our main loop we're accepting arbitrary transfer requests and spawning them off in goroutines. What happens if we have two transfer requests at once? Alice is trying to Venmo Bob $25 for the beanbag chair she just bought off him, while Bob remembers he needs to Venmo Alice the $130 he owes her for Weird Al concert tickets.

If by sheer coincidence they both submit their requests at the same time, we have two transfer calls:

  • transfer(aliceAccount, bobAccount, 25)
  • transfer(bobAccount, aliceAccount, 130)

Each of these calls will attempt to lock its from account and then its to account. If Alice and Bob get very unlucky, the system will start the first transfer and lock Alice's account, then get paused by the scheduler. When the second transfer call comes in, it first locks Bob's account, then tries to lock Alice's account, but can't because it's already held by the first transfer. When the first transfer resumes, it in turn tries to lock Bob's account, which the second transfer already holds, so neither can make progress.

This is a classic deadlock situation. Both threads will be stuck forever, and worse, both Alice and Bob's accounts will be locked until the system restarts.

This is a pretty disastrous consequence for a problem which is relatively hard to spot even in this trivially simple example. In a real system with dozens or hundreds of methods being parallelized in a combinatorial explosion of ways it's very difficult to reason about this, and can be a lot of work to ensure locks are obtained in a safe and consistent order.

Golang gets some credit here in that it does provide some runtime tools for detecting both deadlocks and data races, which is great, but these detections only help if your tests encounter the problem; they don't prevent the problem from happening in the first place. Most languages aren't so helpful, and these issues can be very difficult to track down in production systems.

Assessing the damage

What a dumpster fire we've gotten ourselves into...

While it may be no accident that the example I've engineered happens to hit all of the worst bugs at once, in my experience, given enough time and complexity these sorts of problems will crop up in any system eventually. Solving them with mutexes is especially dangerous because it will seem to be an effective solution at first. Mutexes work fine in small localized use-cases, thus tempting us to use them, but as the system grows organically we stretch them too far and they fail catastrophically as the complexity of the system scales up, causing all sorts of hacky workarounds. I'm of the opinion that crossing your fingers and hoping for the best is not an adequate software-engineering strategy.

So, we've seen that architecting a correct software system using mutexes is possible , but very difficult . Every attempt we've made to fix one problem has spawned a couple more.

Here's a summary of the problems we've encountered:

  • Data races causing non-determinism and logic bugs
  • Lack of atomicity causing inconsistent system states
  • Lack of composition causing
    • Broken encapsulation
    • Code duplication
    • Cognitive overload on callers
  • Deadlocks/livelocks causing system-wide freezes
  • New features may require changes to every call-site

In my opinion, we've tried to stretch mutexes beyond their limits, both in this blog post and in the industry as a whole. Mutexes work great in small, well-defined scopes where you're locking a single resource which is only ever accessed in a handful of functions in the same module, but they're too hard to wrangle in larger complex systems with many interacting components maintained by dozens or hundreds of developers. We need to evolve our tools and come up with more reliable solutions.

Cleaning up the Chaos

Thankfully, despite an over-reliance on mutexes, we as an industry have still learned a thing or two since the 1960s. Particularly I think that enforcing immutability by default goes a long way here. For many programmers this is a paradigm shift from what they're used to, which usually causes some uneasiness. Seatbelts, too, were often scorned in their early years for their restrictive nature, but over time it has become the prevailing opinion that the mild inconvenience is more than worth the provided safety.

More and more languages (Haskell, Clojure, Erlang, Gleam, Elixir, Roc, Elm, Unison, ...) are realizing this and adopting it as a core design principle. Obviously not every programmer can switch to an immutable-first language overnight, but I think it would behoove most programmers to strongly consider an immutable language if parallelism is a large part of their project's workload.

Using immutable data structures immediately prevents data races, full stop. So stick with immutable data everywhere you can. But in a world of immutability we'll still need some way to synchronize parallel processes, and for that most of these languages do still provide some form of mutable reference. It's never the default, and there's typically some additional ceremony or tracking in the type system which acts as an immediate signpost that shared mutable state is involved; here there be dragons.

Even better than raw mutable references, decades of academic and industrial research have provided us with a swath of battle-tested, high-level concurrency patterns which are built on top of lower-level synchronization primitives like mutexes or mutable references, typically exposing much safer interfaces to the programmer.

Concurrency Patterns

Actor systems and Communicating Sequential Processes (CSP) are some of the most common concurrency orchestration patterns. Each of these operates by defining independent sub-programs which have their own isolated state that only they can access. Each actor or process receives messages from other units and can respond to them in turn. Each of these deserves a talk or blog post of its own, so I won't dive too deeply into them here, but please look into them further if this is the first you're hearing of them.

These approaches work great for task parallelism , where there are independent processes to run, and where your parallelism needs are bounded by the number of tasks you'd like to run. As an example, I used an actor-based system when building Unison's code-syncing protocol. There was one actor responsible for loading and sending requests for code, one for receiving and unpacking code, and one for validating the hashes of received code. This system required exactly 3 workers to co-operate regardless of how many things I was syncing. Actor and CSP systems are great choices when the number of workers/tasks we need to co-ordinate is statically known, i.e. a fixed number of workers, or a pre-defined map-reduce pipeline. These patterns can scale well to many cores since each actor or process can run independently on its own core without worrying about synchronizing access to shared mutable state, and as a result can often scale to multiple machines as well.
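
To make the shape of this concrete, here's a toy message-passing sketch in Haskell (the language the post switches to below). It is purely illustrative and not the author's Unison sync code: a single worker owns its own loop, and the only communication is via Control.Concurrent.Chan channels.

import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (newChan, readChan, writeChan)
import Control.Monad (forever, replicateM_)

main :: IO ()
main = do
  requests <- newChan  -- work flowing in to the worker
  results  <- newChan  -- processed results flowing back out

  -- The worker owns no shared mutable state; it just consumes messages
  -- and replies with new messages.
  _ <- forkIO (forever (readChan requests >>= \n -> writeChan results (n * 2 :: Int)))

  mapM_ (writeChan requests) [1 .. 5]
  replicateM_ 5 (readChan results >>= print)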

However, there are also problems where the parallelism is dynamic or ad-hoc, meaning there could be any number of runtime-spawned concurrent actors that must co-ordinate well with each other. In those cases these systems tend to break down. I've seen consultants describe complex patterns for dynamically introducing actors, one-actor-per-resource systems, tree-based actor resource hierarchies and other elaborate ideas, but in my opinion these systems quickly outgrow the ability of any one developer to understand and debug them.

So how then do we model a system like the bank account example? Even if we were to limit the system to a fixed number of transfer-workers they'd still be concurrently accessing the same data (the bank accounts) and need some way to express atomic transfers between them, which isn't easily accomplished with actors or CSP.

What's a guy to do?

A new (old) synchronization primitive

In the vast majority of cases using a streaming system, actors or CSP is going to be most effective and understandable. However in cases where we must synchronize individual chunks of data across many workers, and require operations to affect multiple chunks of data atomically, there's only one name in town that gets the job done right.

Software Transactional Memory (STM) is a criminally under-utilized synchronization tool which solves all of the problems we've encountered so far while providing more safety, better compositionality, and cleaner abstractions. Did I mention it prevents most deadlocks and livelocks too?

To understand how STM works, think of database transactions: in a database, transaction isolation provides you with a consistent view of the data in spite of concurrent access. Each transaction sees an isolated view of the data, untampered with by other reads and writes. After making all your reads and writes you commit the transaction. Upon commit, the transaction either succeeds completely and applies ALL the changes you made to the data snapshot, or it results in a conflict. In the case of a conflict the transaction fails and rolls back all your changes as though nothing happened; it can then retry on the new data snapshot.

STM works in much the same way, but instead of the rows and columns in a database, transactions operate on normal in-memory data structures and variables.

To explore this technique let's convert our bank account example into Haskell so we can use STM instead of mutexes.

data Account = Account {
  -- Data that needs synchronization is stored in a 
  -- Transactional Variable, a.k.a. TVar
  balanceVar :: TVar Int
}

-- Deposit money into an account.
deposit :: Account -> Int -> STM ()
deposit Account{balanceVar} amount = do
  -- We interact with the data using TVar operations which
  -- build up an STM transaction.
  modifyTVar balanceVar (\existing -> existing + amount)

-- Withdraw money from an account
-- Everything within the `do` block
-- is part of the same transaction.
-- This guarantees a consistent view of the TVars we 
-- access and mutate.
withdraw :: Account -> Int -> STM Bool
withdraw Account{balanceVar} amount = do
  existing <- readTVar balanceVar
  if existing < amount
    then (return False)
    else do
      writeTVar balanceVar (existing - amount)
      return True

-- Transfer money between two accounts atomically
transfer :: Account -> Account -> Int -> STM Bool
transfer from to amount = do
  -- These two individual transactions seamlessly
  -- compose into one larger transaction, guaranteeing
  -- consistency without any need to change the individual
  -- operations.
  withdrawalSuccessful <- withdraw from amount
  if withdrawalSuccessful
    then do
      deposit to amount
      return True
    else 
      return False

Let's do another lap over all the problems we had with mutexes to see how this new approach fares.

Data Races

Data races are a problem which I believe is best solved at the language level itself. As mentioned earlier, using immutable data by default simply prevents data races from existing in the first place. Since data in Haskell is all immutable by default, pre-emption can occur at any point in normal code and we know we won't get a data race.

When we need mutable data, it's made explicit by wrapping that data in TVars. The language further protects us by only allowing us to mutate these variables within transactions, which compose into operations that are guaranteed a consistent, uncorrupted view of the data.
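
As a tiny standalone sketch of that guarantee (my example, not the author's): a hundred threads all bumping the same TVar will never lose an update, because each increment runs in its own transaction.

import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.STM (atomically, modifyTVar', newTVarIO, readTVarIO)
import Control.Monad (replicateM_)

main :: IO ()
main = do
  counter <- newTVarIO (0 :: Int)
  -- 100 threads, each incrementing the shared counter inside a transaction.
  replicateM_ 100 (forkIO (atomically (modifyTVar' counter (+ 1))))
  threadDelay 100000  -- crude wait for the forked threads; fine for a sketch
  readTVarIO counter >>= print  -- always prints 100, never a lost update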

Let's convert withdraw to use STM and our balanceVar TVar.

-- Withdraw money from an account
withdraw :: Account -> Int -> STM Bool
withdraw Account{balanceVar} amount = do
  existing <- readTVar balanceVar
  if existing < amount
    then (return False)
    else do
      -- No data races here!
      writeTVar balanceVar (existing - amount)
      return True

We can see that the code we wrote looks very much like the original unsynchronized golang version, but with STM it's perfectly safe from data races! Even if the thread is pre-empted in the middle of the operation, the transaction state is invisible to other threads until the transaction commits.

Deadlock/Livelock

STM is an optimistic concurrency system. This means that threads never block waiting for locks. Instead, each concurrent operation proceeds, possibly in parallel, on its own independent transaction log. Each transaction tracks which pieces of data it has accessed or mutated, and if at commit time it is detected that some other transaction has committed and altered data which this transaction also accessed, then this transaction is rolled back and simply retried.

This arrangement is fundamentally different from a lock-based exclusive access system. In STM you don't deal with locks at all; you simply read and write data within a transaction as necessary. Our transfer function reads and writes two different TVars, but since we're not obtaining exclusive locks on these vars, we don't need to worry about deadlock at all. If two threads happen to be running a transfer on the same TVars at the same time, whichever commits first will atomically apply its updates to both accounts, and the other transaction will detect this update at commit time and retry against the new balances.

This can cause some contention and possibly even starvation of any single transaction if many threads are trying to update the same data at the same time, but since a conflict can only occur if some other transaction has been committed, it does still have the guarantee that the system will make progress on at least some work. In Haskell, STM transactions must be pure code, and can't do IO, so most transactions are relatively short-running and should proceed eventually. This seems like a downside, but in practice it only surfaces as a rare annoyance and can usually be worked around without too much trouble.

Composition

It may not be immediately obvious from the types if you're not used to Haskell code, but withdraw, deposit, and transfer are all functions which return their results wrapped in the STM monad, which is essentially a sequence of operations that we can ask to execute as a single transaction using the atomically function.

We can call out to arbitrary functions which return something wrapped in STM and they will automatically be joined in as part of the current transaction.

Unlike our mutex setup, callers don't need to manually handle locks when calling withdraw and deposit, nor do we need to expose special synchronized versions of these methods for things to be safe. We can define them exactly once and use that one definition either on its own or within a more complex operation like transfer without any additional work. The abstraction is leak-proof: the caller doesn't need to know which synchronized data is accessed, nor lock or unlock any mutexes. It simply runs the transaction and the STM system happily handles the rest for you.
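
To make that composability concrete, here's a small hypothetical extension (not from the original post): a transfer that also bumps a shared counter of completed transfers within the same transaction, reusing transfer unchanged. It assumes when (from Control.Monad) and modifyTVar' (from the stm package) are in scope.

-- Hypothetical composition example: run a transfer and, in the same atomic
-- transaction, increment a shared TVar counting completed transfers.
-- Other threads can never observe a committed transfer that hasn't been counted.
transferCounted :: TVar Int -> Account -> Account -> Int -> STM Bool
transferCounted completedVar from to amount = do
  ok <- transfer from to amount
  when ok (modifyTVar' completedVar (+ 1))
  return ok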

Here's what it looks like to actually run our STM transactions, which we do using the atomically function:

main :: IO ()
main = do
  forever $ do
    req <- acceptTransferRequest
    -- Run each transfer on its own green-thread, in an atomic transaction.
    forkIO (atomically (transfer req.from req.to req.amount))
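
One detail the snippet above glosses over is where the Accounts come from (acceptTransferRequest is the author's placeholder and stays undefined here). TVars are created in IO, so a minimal, hypothetical constructor could look like this:

-- Hypothetical helper, assuming an account only needs an opening balance.
mkAccount :: Int -> IO Account
mkAccount openingBalance = Account <$> newTVarIO openingBalance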

If we'd like to compile a report of all account balances as we did previously, we can do that too. This time however we won't get a potentially inconsistent snapshot of the system by accident, instead the type-system forces us to make an explicit choice of which behaviour we'd like.

We can either:

  • Access and print each account balance individually as separate transaction which means accounts may be edited in-between transactions, leading to an inconsistent report like we saw earlier.
  • Or, we can wrap the entire report into a single transaction, reading all account balances at once. This will provide a consistent snapshot of the system, but due to the optimistic transaction system, the entire transaction will be retried if any individual transfer commits and edits accounts while we're collecting the report. It's possible that if transfers are happening very frequently, the report may be retried many times before it can complete.

This is a legitimate tradeoff that the developer of the system should be forced to consider.

Here's what those two different implementations look like:

-- Inconsistent report, may see money disappear/appear
reportInconsistent :: [Account] -> IO ()
reportInconsistent accounts = do
  for_ accounts $ \Account{balanceVar} -> do
    balance <- atomically (readTVar balanceVar)
    print balance

-- Consistent report, may be retried indefinitely 
-- if transfers are happening too frequently
reportConsistent :: [Account] -> IO ()
reportConsistent accounts = do
  balances <- atomically $ do
    for accounts $ \Account{balanceVar} -> do
      readTVar balanceVar
  -- Now that we've got a snapshot we can print it out
  for_ balances print

Smart Retries

One last benefit of STM which we haven't yet discussed is that it supports intelligent transaction retries based on conditions of the synchronized data itself. For instance, if we have a task to withdraw $100 from Alice's account but it only has $50 in it, the mutex-based system has no choice but to fail the withdrawal entirely and return the failure up the stack. We can wrap that call with code to try again later, but how will we know when it's reasonable to try again? This would once again require the caller to understand the implementation details, and which locks the method is accessing.

STM, instead, supports failure and retrying as a first-class concept. At any point in an STM transaction you can simply call retry; this records every TVar that the transaction has accessed up to that point, aborts the current transaction, and sleeps until any of those TVars has been modified by some other successful transaction. This avoids busy-waiting, and allows writing some very simple and elegant code.

For example, here's a new version of our withdraw function which instead of returning a failure will simply block the current thread until sufficient funds are available, retrying only when the balance of that account is changed by some other transaction's success.

-- Withdraw money from an account, blocking until sufficient funds are available
withdraw :: Account -> Int -> STM ()
withdraw Account{balanceVar} amount = do
  existing <- readTVar balanceVar
  if existing < amount
    then retry
    else do
      writeTVar balanceVar (existing - amount)

You typically wouldn't use this to wait for an event which may take days or weeks to occur like in this example; but it's a very elegant and efficient solution for waiting on a channel, waiting for a future to produce a result, or waiting on any other short-term condition to be met.

Here's an example utility for zipping together two STM queues. The transaction will only succeed and produce a result when a value is available on both queues, and if that's not the case, it will only bother retrying when one of the queues is modified since readTQueue calls retry internally if the queue is empty.

zipQueues :: TQueue a -> TQueue b -> STM (a, b)
zipQueues q1 q2 = do
  val1 <- readTQueue q1
  val2 <- readTQueue q2
  return (val1, val2)
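
And a quick hypothetical usage sketch, assuming forkIO plus the TQueue helpers newTQueueIO and writeTQueue from the stm package: the zipQueues transaction blocks, without busy-waiting, until both queues have a value to give.

demoZip :: IO ()
demoZip = do
  q1 <- newTQueueIO
  q2 <- newTQueueIO
  -- The writers may run in either order; zipQueues simply retries until
  -- both queues are non-empty.
  _ <- forkIO (atomically (writeTQueue q1 "left"))
  _ <- forkIO (atomically (writeTQueue q2 "right"))
  pair <- atomically (zipQueues q1 q2)
  print pair  -- ("left","right")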

Nifty!

Conclusion

We've covered a lot in this post. If there's only one thing you take away from it, I hope it's to consider whether mutexes with shared mutable state are providing you with utility that outweighs their inherent costs and complexities. Unless you need peak performance, you may want to think twice about using such dangerous tools. Instead, consider using a concurrency pattern like actors, CSP, streaming, or map-reduce if it matches your use-case.

If you need greater flexibility or lower-level control, Software Transactional Memory (STM) is a fantastic option if it's available in your language, though note that not all languages support it, and those that do may not be able to provide sufficient safety guarantees due to pervasive mutable variables and data structures.

If you're starting a new project for which concurrency or parallelism is a first-class concern, consider trying out a language that supports STM properly, I can recommend Unison or Haskell as great starting points.

Hopefully you learned something 🤞! If you did, please consider joining my Patreon to keep up with my projects, or check out my book: it teaches the principles of using optics in Haskell and other functional programming languages and takes you all the way from a beginner to a wizard in all types of optics! You can get it here. Every sale helps me justify more time writing blog posts like this one and helps me to continue writing educational functional programming content. Cheers!

Google CEO: If an AI bubble pops, no one is getting out clean

Hacker News
arstechnica.com
2025-11-18 17:47:41
Comments...
Original Article

Market concerns and Google’s position

Alphabet’s recent market performance has been driven by investor confidence in the company’s ability to compete with OpenAI’s ChatGPT , as well as its development of specialized chips for AI that can compete with Nvidia’s. Nvidia recently reached a world-first $5 trillion valuation due to making GPUs that can accelerate the matrix math at the heart of AI computations.

Despite acknowledging that no company would be immune to a potential AI bubble burst, Pichai argued that Google’s unique position gives it an advantage. He told the BBC that the company owns what he called a “full stack” of technologies, from chips to YouTube data to models and frontier science research. This integrated approach, he suggested, would help the company weather any market turbulence better than competitors.

Pichai also told the BBC that people should not “blindly trust” everything AI tools output. The company currently faces repeated accuracy concerns about some of its AI models. Pichai said that while AI tools are helpful “if you want to creatively write something,” people “have to learn to use these tools for what they’re good at and not blindly trust everything they say.”

In the BBC interview, the Google boss also addressed the “immense” energy needs of AI, acknowledging that the intensive energy requirements of expanding AI ventures have caused slippage on Alphabet’s climate targets. However, Pichai insisted that the company still wants to achieve net zero by 2030 through investments in new energy technologies. “The rate at which we were hoping to make progress will be impacted,” Pichai said, warning that constraining an economy based on energy “will have consequences.”

Even with the warnings about a potential AI bubble, Pichai did not miss his chance to promote the technology, albeit with a hint of danger regarding its widespread impact. Pichai described AI as “the most profound technology” humankind has worked on.

“We will have to work through societal disruptions,” he said, adding that the technology would “create new opportunities” and “evolve and transition certain jobs.” He said people who adapt to AI tools “will do better” in their professions, whatever field they work in.

WhatsApp Census

Lobsters
github.com
2025-11-18 17:38:04
Comments...
Original Article
No preview for link for known binary extension (.pdf), Link: https://github.com/sbaresearch/whatsapp-census/blob/main/Hey_there_You_are_using_WhatsApp.pdf.

Cloudflare Suffered Hours Long Outage (Which Brought Down Daring Fireball, Among Thousands of Other Sites)

Daring Fireball
www.theguardian.com
2025-11-18 17:30:19
Cloudflare suffered an hours-long global outage, starting around 12pm UTC / 7am ET, which brought down an enormous chunk of the Internet. This included, humorously, Down Detector. It also included (not so humorously to me) Daring Fireball, which has been routed through Cloudflare since 2018. My apol...
Original Article

'Incident now resolved' says Cloudflare

The firm has just issued an update saying it believes the incident over.

A fix has been implemented and we believe the incident is now resolved. We are continuing to monitor for errors to ensure all services are back to normal.

I’ve just quickly tested several key sites which are loading again.

A recap of the Cloudflare outages

  • Key sites around the world went down, some for a few hours, after a widely relied-upon Internet infrastructure company suffered an unknown issue

  • The outages took place in the early hours of US morning and during UK business hours

  • It affected users of everything from Spotify, ChatGPT, X, Zoom, Microsoft Teams and Canva to retail websites of Visa, Vodafone and Vinted and UK grocery chains Asda and M&S.

  • Cloudflare said at 11:48 GMT that it was investigating and working on a fix. It declared the incident resolved nearly three hours later.

  • It’s still unknown what exactly the problem was but there had been scheduled maintenance for Tuesday.

  • The US company provides protection and defensive services to millions of sites, and claims to handle a fifth of web traffic.

  • Cloudflare said it had to temporarily disable some services for UK users in its attempts to fix the issue today.

  • Its latest update at 14:57 GMT read: “Some customers may be still experiencing issues logging into or using the Cloudflare dashboard. We are working on a fix to resolve this, and continuing to monitor for any further issues.

We’ll leave the blog there for now. For a full recount of the issue, read our UK technology editor’s report:

Maintenance had been scheduled today

The company’s engineers had been due to carry out maintenance today at data centres in Tahiti, Los Angeles, Atlanta and Santiago, according to the company’s updates.

It’s not clear yet whether their activities were related to the outage.

Unlikely to be a cyber-attack, says expert

Robert Booth

Cloudflare was described as “the biggest company you’ve never heard of” by Prof Alan Woodward of the Surrey Centre for Cyber Security.

The company says it provides services to “protect your websites, apps, APIs, and AI workloads while accelerating performance”.

Woodward has described it as a “gatekeeper” and says its roles included monitoring traffic to sites to defend them against distributed denial of service attacks when malicious actors try to overwhelm sites with requests. It also checks users are human.

While the cause remains unclear, Woodward said it was unlikely to be a cyber-attack as a service so large was unlikely to have a single point of failure.

The problems at Cloudflare come less than a month after an outage of Amazon Web Services brought down thousands of sites.

“We’re seeing how few of these companies there are in the infrastructure of the internet, so that when one of them fails it becomes really obvious quickly,” Woodward said.

Cloudflare slowly recovering but some key platforms still down

Cloudflare headquarters in San Francisco. Photograph: Eric Risberg/AP

Cloudflare, whose network handles around a fifth of web traffic, has deployed a fix and is slowly recovering service. But there are still key platforms down, such as ChatGPT.

“We are continuing working on restoring service for application services customers,” says the firm in its latest update at 13:58 GMT.

Less than an hour ago, the firm said it had made changes that meant error levels for its Cloudflare and Warp encryption service had returned to pre-incident rates.

“We have re-enabled WARP access in London,” Cloudflare said.

Several major sites and platforms have been affected by the issue including X, Spotify and ChatGPT.

Facebook, Amazon Web Services, and the sites for Ikea, Uber, Visa and Vodafone have also suffered outages, according to Downdetector.

Cloudflare outage causes error messages across the internet

Robert Booth

A key piece of the internet’s usually hidden infrastructure suffered a global outage on Tuesday, causing error messages to flash up across websites.

Cloudflare, a US company whose services include defending millions of websites against malicious attacks, experienced an unidentified problem on Tuesday, which meant internet users could not access some of its customers’ websites.

Some site owners could not access their performance dashboards. Sites including X and OpenAI suffered increased outages at the same time as Cloudflare’s problems, according to Downdetector .

The outage is ongoing but as of 12.21pm GMT, the company said: “We are seeing services recover, but customers may continue to observe higher-than-normal error rates as we continue remediation efforts.”

A further message said: “Update: we are continuing to investigate this issue.”

A spokesperson for Cloudflare said: “We saw a spike in unusual traffic to one of Cloudflare’s services beginning at 11.20am. That caused some traffic passing through Cloudflare’s network to experience errors. While most traffic for most services continued to flow as normal, there were elevated errors across multiple Cloudflare services.

“We do not yet know the cause of the spike in unusual traffic. We are all hands on deck to make sure all traffic is served without errors. After that, we will turn our attention to investigating the cause of the unusual spike in traffic.”

Microsoft is bringing native Sysmon support to Windows 11, Server 2025

Bleeping Computer
www.bleepingcomputer.com
2025-11-18 17:25:18
Microsoft announced today that it is integrating Sysmon natively into Windows 11 and Windows Server 2025 next year, making it unnecessary to deploy the standalone Sysinternals tools. [...]...
Original Article

Windows utility

Microsoft announced today that it is integrating Sysmon natively into Windows 11 and Windows Server 2025 next year, making it unnecessary to deploy the standalone Sysinternals tools.

"Next year, Windows updates for Windows 11 and Windows Server 2025 will bring Sysmon functionality natively to Windows," reads an announcement by Sysinternals creator Mark Russinovich.

"Sysmon functionality allows you to use custom configuration files to filter captured events. These events are written to the Windows event log. enabling a wide range of use cases including by security applications."

Sysmon (or System Monitor) is a free Microsoft Sysinternals tool that can be configured to monitor for and block malicious/suspicious activity and log events to the Windows Event Log.

By default, Sysmon monitors basic events, such as process creation and termination. However, it is possible to create advanced configuration files that let you monitor and perform more advanced behavior, such as monitoring process tampering , DNS queries , executable file creation , Windows clipboard changes , and auto-backing up deleted files .

Sysmon is a very popular tool for threat hunting and diagnosing persistent issues in Windows, but it normally needs to be installed individually on devices, making it harder to manage and reducing coverage in large IT environments.

With Sysmon now natively supported in Windows, users and admins can install it via Windows 11's "Optional features" settings dialog and receive new software updates directly through Windows Update, making deployment and management much easier.

Microsoft says the built-in capabilities will retain Sysmon's standard feature set, including support for custom configuration files and advanced event filtering.

Once installed, admins can enable it via the Command Prompt using the following command for basic monitoring:

sysmon -i

For more advanced monitoring using a custom configuration file, users can deploy it using the following command:

sysmon -i <name_of_config_file>

For example, if you wanted to log when new executables are created under the C:\ProgramData\ and C:\Users\ folders, you can use the following configuration file:

<Sysmon schemaversion="4.90">
  <!-- Capture all hashes -->
  <HashAlgorithms>MD5,SHA256</HashAlgorithms>
  <EventFiltering>
    <!-- Log executable file creations -->
    <FileExecutableDetected onmatch="include">
    <TargetFilename condition="begin with">C:\ProgramData\</TargetFilename>
    <TargetFilename condition="begin with">C:\Users\</TargetFilename>
    </FileExecutableDetected>
  </EventFiltering>
</Sysmon>

Now, when a new executable is created in one of those directories, Windows logs it to the Event Logs, as shown below.

Sysmon event 29 - File Executable Detected. Source: BleepingComputer

Other popular events logged by Sysmon include:

  • Event ID 1 – Process Creation: Useful for detecting suspicious command-line activity.
  • Event ID 3 – Network Connection: Logs outbound connections for anomaly detection or C2 activity.
  • Event ID 8 – Process Access: Can expose attempts to access LSASS for credential dumping.
  • Event ID 11 – File Creation: Tracks script file generation often used in malware staging.
  • Event ID 25 – Process Tampering: Helps identify process hollowing and other evasion techniques.
  • Event IDs 20 & 21 – WMI Events: Captures persistent activity through WMI consumers and filters.
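
As a quick sanity check that events like these are actually being recorded (assuming the native version keeps the standard Microsoft-Windows-Sysmon/Operational log name used by the Sysinternals tool), recent entries can be listed from an elevated PowerShell prompt:

Get-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational" -MaxEvents 20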

Microsoft also confirmed that it will finally release comprehensive documentation on using Sysmon next year, as well as bring new enterprise management features and AI-powered threat detection capabilities.

For now, if you wish to test or deploy Sysmon in your environment, you can do so using the individual tool on the Sysinternals site and by reviewing SwiftOnSecurity's example Sysmon configuration .



Pebble, Rebble, and a Path Forward

Hacker News
ericmigi.com
2025-11-18 17:24:27
Comments...
Original Article

I believe the Pebble community, Core Devices, Rebble and I all want the same thing. We love our Pebbles and want them to keep working long into the future. We love the community that has sprung up around Pebble, and how it’s persevered - next year will be the 14th anniversary of the original Kickstarter campaign!

But I have to respond to claims made by Rebble posted on their blog yesterday. I will link to their post so you can read their side of the story, and I’ve asked them to link back to this blog post from theirs.

Look - I’m the first person to call myself out when I fail. I wrote a detailed blog post about Success and Failure at Pebble and often write in detail about learning from my mistakes. But in this specific case, you’ll find that I’ve done my utmost to respect the Pebble legacy and community. Rebble is misleading the community with false accusations.


For those just passing through, here’s the TLDR:

Core Devices is a small company I started in 2025 to relaunch Pebble and build new Pebble smartwatches . Rebble is a non-profit organization that has supported the Pebble community since 2017. Rebble has done a ton of great work over the years and deserves recognition and support for that.

Core Devices and Rebble negotiated an agreement where Core would pay $0.20/user/month to support Rebble services. But the agreement broke down over the following disagreement.

Rebble believes that they ‘100%’ own the data of the Pebble Appstore. They’re attempting to create a walled garden around 13,000 apps and faces that individual Pebble developers created and uploaded to the Pebble Appstore between 2012 and 2016. Rebble later scraped this data in 2017.

I disagree. I’m working hard to keep the Pebble ecosystem open source. I believe the contents of the Pebble Appstore should be freely available and not controlled by one organization.

Rebble posted a blog post yesterday with a bunch of false accusations, and in this post I speak to each of them.


Sections

Short history #

  • Dec 2016 - Pebble shut down . Some IP was sold to Fitbit. I blogged about why I think we failed. Fitbit continued to run the Pebble Appstore and web services for 1.5 years. I really appreciated that.
    • Rebble organization grew out of the official Pebble Developers Discord.
  • July 2018 , Fitbit shut down the Pebble appstore .
    • Before it shut down, Rebble (and others ) scraped all 13,000 apps and metadata from the Pebble Appstore. Rebble began hosting a copy of the appstore . They created a new Dev Portal where developers could upload new apps, roughly 500 have been uploaded since July 2018.
    • Rebble also reverse engineered many Pebble web services (weather, timeline and voice transcription) and provided them as a paid service for the Pebble community.
  • Jan 2025 - Google open sourced PebbleOS , breathing new life into the community.
  • March 2025 - I announced a new company (Core Devices) and 2 new watches - store.rePebble.com
  • November 2025 - we finished shipping out 5,000 Pebble 2 Duos . We’re working hard on Pebble Time 2. We’re aiming to start shipping in January.

Their accusations #

Accusation 1 : ‘Rebble paid for the work that [Eric] took as a base for his commercial watches’

Facts:

  • I think they’re accusing me of ‘stealing’ open source contributions to PebbleOS that Rebble paid for. This is entirely false.
  • We did not take any PebbleOS work Rebble paid for ‘as a base for [our] commercial watches’. To the best of my knowledge, Rebble never paid the developer who ported NimBLE into PebbleOS. My best guess is that they are referring to Rebble having paid CodeCoup, the company behind NimBLE, to fix some bugs that affected older non-Core Devices watches. Any Rebble-sponsored CodeCoup commits are not present in our repo. In fact, the opposite is true - we paid Codecoup $10,000 to fix multiple BLE stack issues, some of them on the host side, benefiting all devices, including old Pebbles.
  • We started using our own repo for PebbleOS development because PRs on the Rebble repo reviews were taking too long. We only had one firmware engineer at the time (now we have a whopping 2!) and he felt like he was being slowed down too much. All of our contributions to PebbleOS have been 100% open source .
  • Overall, the feedback that PebbleOS could benefit from open governance is well taken. Long term, PebbleOS would be a good fit for open source organization with experience in open governance, like Apache or Linux Foundation. I wrote about this last week.
  • With our small team and fairly quick development schedule, it's true that we haven't PRed our changes into Rebble’s repo. It’s tough to prioritize this while we are busy fixing bugs and getting ready for Pebble Time 2.

Accusation 2: ‘Core took Rebble’s work’ on libpebblecommon to create libpebble3

Facts:

  • The majority (>90%) of our new open source libpebble3 library was written by Core Devices employees. The remainder comes from libpebblecommon , another open source library written by two people.

  • In April 2025, Core purchased the copyright to the libpebblecommon code from the two maintainers and incorporated it into libpebble3, which is also open source.

  • All our contributions to libpebble3 are GPL-3.0 licensed. Here's the motivation behind our licensing strategy for this repo. We use the same CLA agreement as Matrix, QT and MySQL. Our CLA explicitly includes a clause that requires Core Devices to distribute all contributions under an OSI-compatible FOSS license (e.g. GPLv3).

  • Note that neither Rebble libpebblecommon maintainer signed the Rebble blog post.

Side note regarding Cobble: I don't think Rebble even knows this, but in 2024 I personally spent over $30,000 to support its development, way before PebbleOS was open source. It was my own way to support the community.

Accusation 3: ‘ Core promised that they would let Rebble maintain and own the developer site’

Facts:

  • Nothing of the sort was agreed upon. See the full written agreement that Core Devices has with Rebble towards the bottom. Rebble agreed that Core would host the developer site.
  • I have been maintaining and updating the developer site personally - all open source . Having two sources of truth would be confusing for the community.

Accusation 4: ‘ [Eric] scraped our app store, in violation of the agreement that we reached with him previously’

Note: ‘scraping’ usually means the automated extraction of data from a website.

Facts:

  • Here’s what happened. I wanted to highlight some of my favourite watchfaces on the Pebble Appstore. Last Monday Nov 10, after I put my kids to sleep and between long calls with factories in Asia, I started building a webapp to help me quickly go through Pebble Appstore and decide which were my top picks.
  • Let me be crystal clear - my little webapp did not download apps or ‘scrape’ anything from Rebble. The webapp displayed the name of each watchface and its screenshots and let me click on my favs. I used it to manually look through 6,000 watchfaces with my own eyes. I still have 7,000 to go. Post your server logs; they will match up identically to the app I (well…Claude) wrote (source code here)
  • I integrated these picks into the Pebble Appstore on Saturday and posted about it on Sunday .

All four of these accusations could have been cleared up simply by asking me. Instead, Rebble decided to post them on their blog and threaten a lawsuit.

Backstory #

How did we get here?

Why are there dueling blog posts in the Pebbleverse?

I think most of the people behind Rebble are great and the community overall is awesome. I know they truly mean well, but there are many aspects of the org that are severely troubling. I am very close with one of the Rebble board members, who I consider a personal friend. Over the years, I learned a lot about the organization and helped coach him through some major disputes between board members.

I exchanged literally thousands of messages with my friend on this topic over the span of 3 years. I refrained from getting too involved, despite being asked several times to join Rebble as a board member or lead the organization. I demurred - I saw how painful it was for him and I had no interest in being part of that.

Core Devices + Rebble: 2025

PebbleOS is now open source ! Yay. This is thanks to the work of many Googlers, ex-Pebblers and others - I called out (hopefully) all of them in my blog post in March. I really wanted Rebble to be a part of the Pebble revival going forward. I hired 3 people from Rebble to join Core Devices. I regularly brought up Rebble’s efforts over the years.

I engaged with Rebble folks in discussions in the spring on how we could formally work together, and then made some concrete proposals in the summer. One difficulty was that Core Devices is a business with customers and schedules. This didn’t always sync up with the timeframes of a non-profit. Things became very drawn out. It was very hard to pin people down, even on simple stuff like what the goals of Rebble as an organization were.

Regardless, I continued pushing to make Rebble a key part of the Pebble relaunch.

By August, we finally got close to an agreement.

On September 30, 2025, we agreed to the following document and published respective blog posts (ours, theirs). Core Devices would pay Rebble $0.20/user/month. I considered it a donation to a group that has done so much to support the community. But I purposely pushed for openness - no single group (Core Devices or Rebble) should be in control.

Notice the final bullet in the App store section:

All binary/metadata (including historical apps) will be published as archive file (no scraping Rebble services)

Looking back, we should have had clearer wording in this agreement. But this came after months of chat discussions and hours of Zoom calls. I honestly thought that we had reached an agreement to make the archive open, as in this message I received from a Rebble board member.

By the end of October, Rebble had changed their mind about providing an archive file.

Our disagreement #

Notwithstanding their false accusations of theft, the crux of our disagreement is the archive of 13,000 Pebble apps and watchfaces that were uploaded to the Pebble Appstore before it was shut down in July 2018.

  • I believe that these apps and watchfaces should be archived publicly and freely accessible by anyone. They should not be held behind a walled garden by one organization. I repeatedly advocated for hosting this data on a neutral 3rd party like Archive.org.
  • Rebble believes ‘the data behind the Pebble App Store is 100% Rebble’ (this is a direct quote from their blog post). They repeatedly refer to all watchfaces and watchapps as ‘our data’.

This is just plainly false. The apps and watchfaces were originally uploaded by individual developers to an appstore run by a company that no longer exists. These folks created beautiful work and shared them freely with the Pebble community. I’ve spoken with numerous Pebble app developers about this. After the fall of Pebble Tech Corp, none of them envisioned one single organization claiming ownership of their work and restricting access, or charging money for access.

Let’s do the right thing - honour the original developers and create a free publicly available archive of their beautiful watchfaces and watchapps.

Our plan for the future #

It's easy to assume the worst in situations like this. But our plan for the appstore is pretty straightforward. We’re working on rewriting the appstore frontend to be native in the mobile app rather than a web view. Rebble’s appstore backend API will be the data source. Rebble’s dev portal is where developers upload apps. No subscription or Rebble account will be required to download apps. We intend to curate how the appstore is displayed in the Pebble app.

We’re excited to see other Pebble-supporting mobile apps pop up - like MicroPebble and GadgetBridge , offering different features and experiences. We’d love to support these efforts with open source code or financially.

Reading things like ‘We’re happy to let them build whatever they want as long as it doesn’t hurt Rebble’ in their blog post worries me. Take our voice-to-text and weather features. Rebble currently offers these as part of their paid subscription. Our new Pebble mobile app includes an on-device speech-to-text feature. We’re planning to include weather for free in our app and make the data available to all watchfaces so you don’t need to configure each one separately. These features are better for users, but would they ‘hurt’ Rebble? Will I need to ask permission from Rebble before building these features? It’s clear that the goals of a non-profit and a device manufacturer will not always be in alignment.

Now consider the appstore. It’s a fundamental part of the Pebble experience. Even before yesterday’s accusations, I felt wary about relying too heavily on a 3rd party like Rebble to provide such a critical service. When people buy a watch from Core Devices, they expect to be able to download apps and watchfaces. If Rebble leadership changes their mind, how can I be certain I can deliver a good experience for our customers? This is one of the primary reasons I think it’s important for an archive of the Pebble Appstore to be freely available.

Rebble still has the chance to redeem itself and do right by the community #

Rebble - prove that you believe in an open, unrestricted Pebble community. Tear down the walled garden you are trying to create. Publish your copy of the Pebble Appstore archive. Stop saying that you ‘100%’ own other developers’ data. Let’s move on from this ridiculous sideshow and focus on making Pebble awesome!

I’ve worked hard to structure everything that we’re doing to be sustainable for the long term, and to do right by the Pebble community. I think Rebble should do the same.

I earned almost nothing from Pebble Tech Corp. I paid myself a $65,000 salary each year. I did not get any payout through the asset sale. I fought to make sure that all Pebble employees were taken care of as best as possible, and that the Pebble community would live on. I believe that at every turn, I’ve done right by the community.

I didn’t relaunch Pebble to make a lot of money. My goal this time round is to make it sustainable. I want to continue making more watches and cool gadgets. There are no investors. I am taking huge risks doing this. I relaunched it because I love Pebble and want it to live on long into the future. Generally, I am excited and positive for the future, despite everything.

For everyone else, again, I apologize for the extreme amount of inside baseball and for taking up time you could be spending on better things. I’ll leave the comments open here. Please refrain from any personal attacks or vicious comments (at me or other people) - follow the HN guidelines.

Eric Migicovsky

Microsoft Teams to let users report messages wrongly flagged as threats

Bleeping Computer
www.bleepingcomputer.com
2025-11-18 17:14:34
Microsoft says that Teams users will be able to report false-positive threat alerts triggered by messages incorrectly flagged as malicious. [...]...
Original Article


Microsoft says that Teams users will be able to report false-positive threat alerts triggered by messages incorrectly flagged as malicious.

This new feature was first announced in September, when it entered a targeted rollout phase, and will roll out to users worldwide by the end of November 2025.

"Microsoft Teams now enables users to report messages they believe were incorrectly flagged as security threats in chats and channels," Microsoft said in a Microsoft 365 message center update.


"It empowers users to provide feedback on false positives, helping improve detection accuracy and strengthen organizational security."

False-positive user reporting will be available to organizations using Microsoft Defender for Office 365 Plan 2 or Microsoft Defender XDR, and it will be accessible across desktop (Windows and macOS), mobile (Android and iOS), and web platforms.

Once it reaches general availability, this Teams feature will be toggled on by default; however, admins can also turn it on or off in the Teams admin center and the Microsoft Defender portal.

Teams false positive reporting (Microsoft)

To toggle on user reporting for incorrect security detections, admins have to:

  1. Sign in to the Teams Admin Center at https://admin.teams.microsoft.com .
  2. In the left navigation, select "Messaging settings."
  3. Scroll down to "Messaging safety" settings.
  4. Turn on the "Report incorrect security detections" setting.
  5. Select "Save" to apply the changes.

Last week, Microsoft started rolling out a new Teams feature for Premium customers that automatically blocks screen recordings and screenshots during meetings.

In recent months, Microsoft has also announced that Teams will warn users when they send or receive private messages containing links flagged as malicious, and has begun working to enhance protection against malicious file types and URLs in Teams chats and channels.

All these features are now rolling out worldwide and are expected to reach general availability by the end of November 2025.

During last year's Enterprise Connect conference, Microsoft said that over 320 million people use Teams each month across 181 markets.


Don't Let Larry Summers Back Into Polite Society

Intercept
theintercept.com
2025-11-18 17:10:08
Summers said he’s “ashamed” of his relationship with Jeffrey Epstein and would step back from public life. This time it should be for good. The post Don’t Let Larry Summers Back Into Polite Society appeared first on The Intercept....
Original Article
Former Treasury Secretary Larry Summers speaks during the World Economy Summit in Washington, D.C., on April 17, 2024. Photo: Mandel Ngan/AFP via Getty Images

Dylan is a senior researcher at the Revolving Door Project, where she leads RDP’s Economic Media Project.

Larry Summers is the archetype of the technocratic Democratic insider. A prodigy whose abilities in the academy propelled him to powerful roles in government, he has for decades enjoyed close relationships with nearly every important figure in left-of-center politics, including advising former presidents Bill Clinton, Barack Obama, and (in an informal role ) Joe Biden. His CV gives the impression of the sort of shrewd politico who might broker an epic compromise to save the day in an episode of “The West Wing.”

Beyond the paragons of liberal society, Summers also has prodigious connections to more unsavory sorts, from financial bottom-feeders to goofy Silicon Valley founders like Jack Dorsey — and an outright criminal like human trafficker Jeffrey Epstein.

So it felt overdue when the Harvard Crimson first reported Monday night that Summers would “step back from all public commitments.” Summers was “deeply ashamed,” he told the paper in a statement, and he took “full responsibility for my misguided decision to continue communicating with Mr. Epstein.” The Harvard economist would continue teaching, he said in the statement, and he did not specify which commitments he’d be stepping back from.

Summers, it bears remembering, had been publicly chastened and made a comeback before.

He’s survived numerous scandals, including previous reporting on his connections to Epstein while serving as president of Harvard University. But nothing has laid bare his cavalier attitude toward the appearance of impropriety like the close friendship revealed in the newly released trove of Epstein’s emails. They contain frequent correspondence between the late billionaire sex criminal and both Summers and his wife , Elisa New, a literature professor emerita at Harvard. The emails shed new light on what Summers had previously told the Wall Street Journal was a relationship that “primarily focused on global economic issues.”

Indeed, the emails reveal the two men had a close relationship and discussed deeply personal issues together long after Epstein’s 2008 conviction on the charge of solicitation of a minor — up until July 5, 2019, the day before the financier’s final arrest and subsequent death. In one message, the married Summers bemoans his pursuit of an unnamed woman, to which Epstein offers his read on the situation: “shes smart. making you pay for past errors. … you reacted well.” In further reporting published Monday by the Crimson , Summers and Epstein also discussed the economist’s pursuit of a woman he reportedly referred to as a mentee, and the late financier dubbed himself Summers’s “wing man.”

Summers is a towering figure in economic discourse. The son of two economists and nephew of two Nobel laureates in the subject (his father, Robert Summers, né Samuelson, was Paul Samuelson’s brother; his mother was Kenneth Arrow’s sister), he grew up steeped in the discipline. Not to be overshadowed by his relations, Summers earned his Ph.D. from Harvard, where he became a tenured professor before turning 30, one of the youngest in the school’s history. He went on to hold posts as chief economist at the World Bank, secretary of the Treasury Department, Harvard president, and director of the National Economic Council. That’s the side of his story fit for “The West Wing.”

The undercurrent is far less flattering. While at the World Bank, Summers signed a memo that argued for dumping waste in African nations, although he later claimed it was meant to be sarcastic. As Treasury secretary, he pushed for deregulation and the repeal of the Glass–Steagall Act’s banking oversight, and, crucially, helped block regulation of over-the-counter derivatives (financial instruments traded directly between counterparties, rather than on an exchange) — a decision that ultimately contributed to the disastrous 2008 financial crash. Incidentally, Summers would go on to make millions of dollars working for banks and hedge funds.

After serving in Clinton’s Treasury Department for both terms (he started as undersecretary, then deputy secretary, and ascended to secretary when his mentor, Robert Rubin, left in 1999), Summers ascended to the Harvard presidency. His tenure was eventful. He famously clashed with Cornel West, Henry Louis Gates Jr., and the Afro-American Studies Department; multiple prominent faculty members considered leaving just months into his administration. (On-campus affairs clearly remain top of mind for Summers, whose last public tweet before the Epstein emails dropped was hand-wringing about the Crimson’s support for the Boycott, Divestment, and Sanctions movement.)

Summers caused an uproar as university president when he attempted to explain the gender imbalance in the economics profession by claiming it was the result of women being innately worse at mathematical thinking. While he has claimed this was taken out of context, one of his email exchanges with Epstein showed that Summers’s disdain toward women’s intelligence hadn’t dissipated in the decade since his ouster in 2006. In the email, he snarked: “I observed that half of the IQ In [the] world was possessed by women without mentioning they are more than 51 percent of population.” Under Summers’s leadership, Harvard’s hiring of women to tenure track positions fell from 36 percent to a mere 13 percent . Since then, Summers has become a martyr of sorts for pundits , conservative and liberal alike, decrying cancel culture.

Following his supposed cancellation, Summers took a brief sojourn to Wall Street hedge fund D.E. Shaw, where he made $5.2 million in the two years of his employment at the firm, despite reportedly only working one day a week. Summers padded out his lifestyle by pulling in an additional $2.7 million in speaking fees from Wall Street banks.

With future aspirations in academia apparently limited to merely an at-large professorship at Harvard, Summers turned his eye back to politics in 2008. After advising Obama’s campaign, Summers took an influential role as director of the National Economic Council, where he was instrumental in cutting down the size of the new administration’s stimulus package. After losing out on the chairmanship of the Federal Reserve , Summers returned to Harvard, where he has remained since, while still exerting his influence in the world of politics. He was in the running for a return to the Treasury in the Biden administration, and publicly railed against Covid-19 stimulus checks.

The illustrious deregulator has advised or sat on boards for dozens of companies, including predatory lenders, Wall Street behemoths, and cryptocurrency cons.

Naturally, he also hasn’t been left wanting for lucrative opportunities in the private sector, often explicitly renting out his reputation to corporations. The illustrious deregulator has advised or sat on boards for dozens of companies, including predatory lenders, Wall Street behemoths, and cryptocurrency cons . He worked for Genie Energy while the firm was drilling in the Golan Heights , the illegal Israeli settlement in Syria. He’s also advised CitiBank and Marc Andreessen’s a16z .

On at least three separate occasions , Summers has left a company shortly before they faced investigation. In 2018, he left LendingClub less than a month before the Federal Trade Commission sued the fintech company, charging it with deceptive practices. (The FTC announced in July 2021 that LendingClub would pay $18 million to settle the charges.) He left Digital Currency Group at some point in 2022; the firm’s website listed him as an adviser until November 2022. However, while following up on calls for more transparent disclosure from Summers, Protos reported he had left earlier than that. In any event, the crypto company was hit with a joint SEC/Justice Department probe in January 2023, followed by a lawsuit from New York Attorney General Letitia James in October. (In January, the SEC announced the company would pay $38.5 million in civil penalties.) On February 9, 2024, he abruptly resigned from Block (formerly Square), just one week before they faced investigation from federal regulators. (In January 2025, Block was hit with $255 million in penalties from the Consumer Financial Protection Bureau and 48 states.)

Still, when Sam Altman faced mutiny from inside OpenAI and fired the entire board, it was Larry Summers to whom he turned for help consolidating his control and appeasing investors.

On top of all of this advising and a full professorship, Summers has still found time to be a columnist at the Washington Post, a regular Bloomberg contributor, and an omnipresent source for the journalism elite. Despite this lack of work-life balance, Summers also amazingly managed to make headlines in 2023 by calling for unemployment to increase to combat inflation , set against the backdrop of a tropical locale.

Summers has spent decades enjoying the finer things of life inside the D.C. Beltway: power, fame, millions of dollars, multiple flights on Epstein’s private plane. Over that time frame, he has brought ruin to our financial system, destroyed American manufacturing , helped stop student debt relief, hampered the recovery from the Great Recession, and helped ensure that economic policy serves the interests of capital holders and not workers . In short, we live in a hell made possible in no small part by Summers’ influence.

After years of maintaining a close relationship with a known sex trafficker, he is still teaching undergraduates at Harvard.

The antifeminist writer Helen Andrews recently highlighted Summers’s case as an example of the failures of “cancel culture.” In a way she’s right: Cancel culture failed spectacularly to excise Summers from positions of influence. After years of maintaining a close relationship with a known sex trafficker (which has been public knowledge for years), he is still teaching undergraduates at Harvard. He was advising presidents and senators as recently as 2023. His “cancellation” was not even enough to preclude his consideration for a Cabinet post that would have put him fifth in line for the presidency (again).

As much as any single person can, Summers embodies the most odious qualities of the political elite and the scorn they show for basic human well-being. We don’t need to be getting our policy insights from a pedophile-adjacent, ethically conflicted nepo baby. His advice isn’t worth it. He was wrong about our recent bout of inflation . He was wrong about bank deregulation. He was wrong about free trade agreements. He was wrong about fiscal stimulus. He even lost Harvard nearly $2 billion as president.

We should demand much more from our economists, policymakers, and leaders. Indeed, making a more humane, responsive government will depend on it.

French agency Pajemploi reports data breach affecting 1.2M people

Bleeping Computer
www.bleepingcomputer.com
2025-11-18 16:59:27
Pajemploi, the French social security service for parents and home-based childcare providers, has suffered a data breach that may have exposed personal information of 1.2 million individuals. [...]...
Original Article

Pajemploi, the French social security service for parents and home-based childcare providers, has suffered a data breach that may have exposed personal information of 1.2 million individuals.

The incident impacts registered professional caregivers working for private employers - typically parents - who use the Pajemploi service, which is part of URSSAF, the French organization that collects social security contributions from employers and individuals.

"The Pajemploi service has been the victim of a theft of personal data belonging to employees of private employers using the Pajemploi service," reads the announcement from the agency.


"This cyberattack, detected on November 14, could have affected up to 1.2 million employees of private employers using the Pajemploi service," the public service says.

According to the French agency, the data potentially exfiltrated includes the following types:

  • full names
  • place of birth
  • postal address
  • social security number
  • name of the banking institution used
  • the Pajemploi number
  • accreditation number

Pajemploi's disclosure highlights that the hackers did not have access to bank account numbers (IBANs), email addresses, phone numbers, or account passwords.

Each person affected by the cybersecurity incident will be notified by Pajemploi individually.

Pajemploi also stated that the incident has not impacted its operations, and services such as the processing of submitted declarations or payment of salaries continue uninterrupted.

The agency notes that after detecting the breach, it took immediate action to stop the attack and protect its information systems. The organization also notified the French Data Protection Authority (CNIL) and the National Agency for the Security of Information Systems (ANSSI).

URSSAF recommends that everyone be extra cautious due to the elevated risk of fraudulent emails, SMS, or phone calls targeting them using the stolen information.

BleepingComputer has contacted URSSAF with a request for more information about the incident and whether there is a ransom demand from the threat actor, but we received no reply. We will update the article when we hear back.

At publishing time, no ransomware group has claimed the attack on Pajemploi.

In March 2024, France Travail, formerly Pôle Emploi, the agency responsible for registering unemployed individuals and providing employment assistance, suffered a data breach that exposed the personal data of 43 million individuals in the country.

Over the weekend, Eurofiber France disclosed that hackers breached its network on November 13 and stole customer data from its ticket management platform.


Strix Halo's Memory Subsystem: Tackling iGPU Challenges

Hacker News
chipsandcheese.com
2025-11-18 16:41:34
Comments...
Original Article

Editor’s Note (11/2/2025): Due to an error in moving the article over from Google Docs to Substack, the “Balancing CPU and GPU Bandwidth Demands” section was missing some Cyberpunk 2077 data. Apologies for the mistake!

AMD’s Strix Halo aspires to deliver high CPU and GPU performance within a mobile device. Doing so presents the memory subsystem with a complicated set of demands. CPU applications are often latency sensitive with low bandwidth demands. GPU workloads are often latency tolerant and bandwidth hungry. Then, multitasking requires high memory capacity. Mobile devices need low power draw. Finally, the whole package has to fit within a price tag acceptable to consumers. Investigating how AMD dealt with those challenges should make for a good time.

ASUS has kindly sampled the ROG Flow Z13, which implements Strix Halo in a tablet form factor with 32 GB of LPDDR5X. They’ve made deep dives like this possible, and we greatly appreciate their support.

RX 7600 results were provided by Azralee from the Chips and Cheese Discord.

Strix Halo’s GPU uses a similar cache setup to AMD’s older and smaller mobile chips. As on Strix Point and Hawk Point (Zen 4 mobile), Strix Halo’s GPU is split into two Shader Arrays. Each Shader Array has 256 KB of L1 mid-level cache, and a 2 MB L2 services the entire GPU. Latencies to those GPU-private caches are in line with other RDNA3 and RDNA3.5 implementations. AMD likely kept L2 capacity at 2 MB because a 32 MB memory side cache (Infinity Cache, or MALL) takes over as the GPU’s last level cache. The L2 only has to catch enough traffic to prevent the Infinity Cache from getting overwhelmed. The resulting cache setup is similar to the one in the RX 7600, a lower midrange RDNA3 discrete card.

The Infinity Cache on Strix Halo has slightly higher latency compared to implementations in AMD’s discrete cards. DRAM latency from the GPU is higher as well. Compared to AMD’s other mobile CPUs with iGPUs though, the 32 MB Infinity Cache offers a large cache capacity increase.

Nemes’s Vulkan bandwidth test achieves just under 1 TB/s from Infinity Cache. The figures align well with performance counter data. Taken together with the chip’s 2 GHz FCLK, bandwidth test results suggest the GPU has a 512B/cycle path to the interconnect. If so, each of the GPU’s eight Infinity Fabric endpoints has a 64B/cycle link.

As a memory side cache, Infinity Cache can theoretically handle any access to physical addresses backed by DRAM. In an earlier interview with Cheese (George), AMD indicated that Infinity Cache was focused on the GPU, and that its behavior could change with firmware releases. Some of that change has happened already. When I first started testing Strix Halo just after Hot Chips 2025, results from my OpenCL microbenchmarks reflected Infinity Cache’s presence. I used that OpenCL code to figure out Data Fabric performance events. But PMU data collected from games suggested Infinity Cache wasn’t used once a game went into the background. Hardware doesn’t know whether a process is running in the foreground or background. That’s something the operating system knows, and that info would have to be communicated to hardware via drivers. Therefore, Infinity Cache policy can change on the fly from software control.

From early data collected on 9/1/2025

At that time, Nemes’s Vulkan-based code didn’t reflect Infinity Cache’s presence. PMU data showed a match between CS and UMC traffic, indicating the microbenchmark wasn’t taking advantage of Infinity Cache rather than the cache struggling with the access pattern. I was in the middle of investigating what Infinity Cache did or didn’t apply to when Windows updated. Then, foreground/background status no longer had any effect. Nemes’s Vulkan code was also able to observe the Infinity Cache.

Early observations on Infinity Cache behavior aren’t relevant today, but they do show Infinity Cache’s behavior is influenced by factors beyond a memory request’s origination point. Not all GPU requests install into the cache, and AMD can change cache policy on the fly. AMD could tune behavior with future updates too.

One early observation from OpenCL remained consistent though. Infinity Cache isn’t used for a buffer created with the CL_MEM_ALLOC_HOST_PTR flag and managed with zero-copy map/unmap APIs. CL_MEM_ALLOC_HOST_PTR requests an allocation from host-visible memory. On systems with discrete GPUs, AMD tends to handle that by allocating memory from DRAM attached to the CPU.

Intuitively, that flag shouldn’t make a difference on integrated GPUs. I’m not sure why it affects Infinity Cache behavior. Perhaps Strix Halo splits address ranges for the CPU and GPU under the hood, and the CPU’s address ranges aren’t cacheable from the Infinity Cache’s perspective.

AMD’s discrete Radeon RX 9070 shows similar behavior, with Infinity Cache not being used for host-side memory. Latency to host memory goes up to nearly a microsecond on RX 9070, while it remains unchanged on Strix Halo. Integrated GPUs have an advantage with zero-copy compute code, and it shows.

To further check zero-copy behavior, I have a test that allocates a 256 MB buffer using OpenCL’s Shared Virtual Memory APIs and only modifies a single 32-bit value. Strix Halo supports fine-grained buffer sharing like other recent AMD GPUs, meaning applications can use results generated from the GPU without calling map/unmap functions.

Strix Halo shows low latencies in line with zero-copy behavior. It’s worth noting that not all integrated GPUs can avoid a copy under the hood.

Copy APIs like clEnqueueReadBuffer and clEnqueueWriteBuffer are still relevant, because they’re the traditional way to work with discrete GPUs. Those APIs often use the copy queue and DMA engines, which handle data movement without involving general purpose compute units. Strix Halo can achieve high copy bandwidth in the CPU to GPU direction, but not the other way around.

Performance counter data suggests copies to the GPU don’t go through the Infinity Cache. During a copy, the shared memory controllers should observe both a read from CPU-side memory and a write to GPU-side memory. But there’s nowhere near 100% overhead compared to software measurements.

Bandwidth is lower in the other direction, but curiously CS-level bandwidth is similar. The memory controllers see less bandwidth, indicating some requests were handled on-chip, likely by Infinity Cache. Curiously, there’s way more than 100% overhead when comparing PMU data to software-visible copy bandwidth.

Strix Halo’s CPU side superficially resembles AMD’s flagship desktop parts, with 16 Zen 5 cores split across two Core Complex Dies (CCDs). However, these CCDs use TSMC’s InFO_oS for connectivity to the IO die rather than on-PCB traces. The CCD has 32B/cycle of bandwidth to the system in both the read and write directions.

Therefore, Strix Halo’s CCDs have more bandwidth at the die boundary than their desktop counterparts, but only in the write direction. It’s an advantage that’s likely to have minimal impact because reads often outnumber writes by a large margin.

Other CPU chiplet designs have more bandwidth at die boundaries, including the Compute Tile on Intel’s Meteor Lake and AMD’s own “GMI-Wide” configuration. GMI-Wide uses two links between the CCD and IO die to maximize cross-die bandwidth in lower core count server chips. Even though GMI-Wide doesn’t use advanced packaging, it has significantly more cross-die bandwidth than Strix Halo.

Add = adding a constant to an array, creating a read-modify-write pattern with an equal amount of reads and writes. NT Write = non-temporal writes. These bypass cache and in many CPUs trigger a special case that avoids read-for-ownership for cachelines that are entirely overwritten

In a loaded latency test with reads, a Strix Halo CCD can reach high bandwidth levels at lower latency than standard GMI-Narrow CCDs. Part of that is likely down to its high bandwidth LPDDR5X setup, which a single CCD can’t come close to saturating. But that advantage doesn’t come through until bandwidth loads pass 45-55 GB/s. Before that, LPDDR5X’s high baseline latency puts Strix Halo at a disadvantage. At very high bandwidth load, Intel Meteor Lake’s higher cross-die bandwidth keeps it ahead. AMD’s GMI-Wide setup shows what a bandwidth-focused cross-die link can do, providing excellent bandwidth at low latency.

Bringing both CCDs into play gives Strix Halo a lead over Meteor Lake. I’m starting the test by placing bandwidth load on CCD1 while running the latency test on CCD0. That gives lower latency at bandwidth loads below 60 GB/s because contention at the CCD interface is taken out of the picture. Latency does increase as I spread bandwidth load across both dies, and rises beyond 200 ns as the test approaches die-to-die bandwidth limits. However, a read-only pattern is still limited by cross-die bandwidth and falls far short of the 256 GB/s that the LPDDR5X setup is theoretically capable of.

Advanced packaging may provide latency benefits too. Regular AMD CCDs use SerDes (serializer-deserializer) blocks, which convert signals for transport over lower quality PCB traces. Zen 2’s Infinity Fabric On-Package (IFOP) SerDes for example uses 32 transmit and 40 receive lanes running at a very high clock. Forwarded clock signals per lane data bundle help tackle clock skew that comes up with high speed parallel transmission over wires of unequal lengths. CRC helps ensure data integrity.

All of that adds power and latency overhead. Strix Halo’s InFO_oS packaging doesn’t require SerDes. But any latency advantage is difficult to observe in practice. DRAM requests are the most common type of off-CCD traffic. High LPDDR5X latency masks any latency advantage when looking at DRAM requests, as shown above. Cache coherency traffic is another form of off-CCD traffic, and doesn’t involve DRAM. However, testing that with a “core to core latency” test that bounces cachelines between core pairs also doesn’t provide favorable results for Strix Halo.

A run that produced good cross-CCX latencies

AMD handles cross-CCX cache coherency at Coherent Stations (CS-es) that sit right in front of the memory controllers. Memory traffic is interleaved across memory channels and thus CS instances based on their physical address. I try hitting different physical addresses by testing with various cacheline offsets into a 4 KB page, which gives me different combinations of L3 slices and memory controller + CS pairs. Values within a single run reflect variation based on the tested core pair, while different runs display variation from different memory subsystem blocks owning the tested address.

A run on the worse end with respect to cross-CCX latencies, likely hitting a CS farther away from the CPU endpoints

Cross-CCX latencies on Strix Halo land in the 100-120 ns range depending on the location of the tested core pair, responsible L3 slice, and responsible CS. It’s significantly higher on typical desktop systems or prior mobile chips from AMD. For example, the Ryzen 9 9900X tends to have cross-CCX latencies in the 80-90 ns range, which is in line with prior Zen generations. It’s about 20 ns faster than Strix Halo.

Therefore, I don’t have a satisfactory answer about Strix Halo’s cross-die latency. Latency may indeed be lower at die boundaries. But everything past that boundary has higher latency compared to other client systems, making any advantage invisible to software.

Sharing a memory controller across the CPU and GPU comes with advantages, like making zero-copy behavior more natural to pull off. But it comes with challenges too. CPU and GPU memory requests can contend with each other for DRAM access. Contention surfaces as higher latency. From Zen 4 onward, AMD’s L3 performance monitoring unit (PMU) can measure average latency in nanoseconds for requests external to the core cluster. PMU data isn’t directly comparable to software measurements, because it only accounts for latency after the point of a L3 miss. But it is consistent in slightly under-estimating software observed latency when running a simple latency microbenchmark. When gaming, I typically see low CPU bandwidth demands and correspondingly mild latency increases over the baseline .

The same doesn’t hold true when gaming on Strix Halo’s integrated GPU. Latency rises far above the baseline of around 140 ns. I logged average latency over 1 second intervals, and many of those intervals saw latency figures around 200 ns across several games

I wrote a microbenchmark to investigate how CPU memory latency is impacted by GPU-side bandwidth load. As with the CPU loaded latency test, I run a latency test thread on a CPU core. But instead of using a read-only pattern, I do a standard C=A+B computation across large arrays on the GPU. To control GPU bandwidth load, I can have each OpenCL kernel invocation do more math with A and B before writing the result to C. Results show increased latency at higher GPU bandwidth demands. Other recent iGPUs show similar behavior.
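For reference, here is a toy Rust sketch of just the CPU-side latency probe (the concurrent GPU C = A + B load is not shown, and the array size, shuffle, and iteration counts are illustrative rather than the article's actual benchmark code):

```rust
use std::time::Instant;

// Pointer-chasing latency probe: every load depends on the previous one,
// so hardware prefetchers can't hide the load-to-use latency.
fn chase_latency_ns(num_lines: usize, hops: usize) -> f64 {
    const STRIDE: usize = 64 / std::mem::size_of::<usize>(); // one slot per cache line
    let len = num_lines * STRIDE;
    let mut chain = vec![0usize; len];

    // Build a random permutation of the cache lines with a fixed-seed xorshift,
    // then link the lines into a single cycle.
    let mut order: Vec<usize> = (0..num_lines).collect();
    let mut state = 0x243F_6A88_85A3_08D3u64;
    for i in (1..num_lines).rev() {
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        let j = (state % (i as u64 + 1)) as usize;
        order.swap(i, j);
    }
    for w in 0..num_lines {
        chain[order[w] * STRIDE] = order[(w + 1) % num_lines] * STRIDE;
    }

    // Walk the chain: each iteration is one dependent cache-line load.
    let mut idx = 0usize;
    let start = Instant::now();
    for _ in 0..hops {
        idx = chain[idx];
    }
    let ns = start.elapsed().as_nanos() as f64;
    std::hint::black_box(idx); // keep the walk from being optimized away
    ns / hops as f64
}

fn main() {
    // ~256 MiB working set so the walk lands in DRAM rather than in cache.
    let latency = chase_latency_ns(4 * 1024 * 1024, 20_000_000);
    println!("average load-to-use latency: {latency:.1} ns");
}
```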

In-game CPU bandwidth demands are low, but not as low as a simple latency test. I tried running a couple of read bandwidth threads on top of the test above. Strix Halo seems to let its GPU squeeze out the CPU when under extreme bandwidth demands. Latency suffers, passing 300 ns at one point.

Plotting L3 and memory controller PMU data with 1 second intervals helps capture the relationship between latency and bandwidth usage in more complex workloads. The points don’t track well with microbenchmark data collected with a single CPU-side latency test thread. Perhaps there’s enough CPU-side bandwidth demand to cause contention at both the die-to-die interface and the memory controllers. Or maybe, CPU and GPU bandwidth spikes tend to line up within those 1 second intervals. Whatever the case, PMU data highlights how Strix Halo’s CPU cores need high cache hitrates more than their desktop counterparts.

Cyberpunk 2077’s built-in benchmark is largely CPU bound when run at 1080P with medium settings and no upscaling. I used Intel’s Arc B580 on desktop systems, since it has vaguely similar compute power to Strix Halo’s iGPU. Results show a large gap between Strix Halo and AMD’s desktop platform, even though both use the same Zen 5 cores.

Memory latency under load is largely not a problem with CPU-only workloads, even when considering heavily multithreaded ones. Total bandwidth demands are much lower and actually well within the capabilities of a 128-bit DDR5 setup. That explains why AMD was able to take on quad channel HEDT parts using a desktop dual channel platform back in the Zen 2 days. Good caching likely played a role, and Strix Halo continues to have 4 MB of last level cache per core. PMU data from Cinebench, code compilation, and AV1 video encoding loosely align with microbenchmark results. Latency barely strays above the baseline. Y-Cruncher is an exception. It’s very bandwidth hungry and not cache friendly. Its bandwidth demands are several times higher, and often go beyond a dual channel DDR5-5600 setup’s capabilities. Strix Halo is a good choice for that type of workload. But in the client space, bandwidth hungry CPU applications tend to be exceptions.

Observations above suggest Strix Halo’s Infinity Fabric and DRAM setup focuses on feeding the GPU and as a result the CPU gets the short end of the stick. High Infinity Fabric endpoint count and a wide LPDDR5X bus provide high bandwidth at high latency. CPU workloads tend to be latency sensitive and contention can make that even worse.

Strix Halo shows AMD can move hundreds of gigabytes per second over Infinity Fabric within mobile power budgets. It’s impressive in that respect, but CPU-side latency is high

Other aspects of the memory subsystem de-prioritize the CPU as well. CPU accesses don’t fill into the Infinity Cache, but they still do a lookup, likely to maintain cache coherency with the GPU. That cache lookup at the very least costs power and might add latency, even though it’ll almost never result in a hit. Lack of GMI-Wide style bandwidth is another example.

ASUS’s ROG Flow Z13 placed next to the original Surface Pro for comparison. A hypothetical larger iGPU would be difficult to accommodate in such a form factor, and would face stiff competition from discrete GPU setups

AMD’s decisions are understandable. Most client workloads have light bandwidth requirements. Strix Halo’s memory system design lets it perform well in portable gaming devices like the ROG Flow Z13. But it does make tradeoffs. And extrapolating from those tradeoffs suggests iGPU designs will face steeper challenges at higher performance tiers.

For its part, Strix Halo strikes a good balance. It enjoys iGPU advantages without being large enough for the disadvantages to hurt. I hope AMD continues to target Strix Halo’s market segment with updated designs, and look forward to seeing where they go next.

If you like the content then consider heading over to the Patreon or PayPal if you want to toss a few bucks to Chips and Cheese. Also consider joining the Discord .

Inside Rust's std and parking_lot mutexes - who wins?

Lobsters
blog.cuongle.dev
2025-11-18 16:38:15
Comments...
Original Article

A while ago, our team was working on a Rust project where std::sync::Mutex was everywhere. A team member suggested switching to parking_lot::Mutex instead. They heard that it has better performance, smaller memory footprint, and more predictable behavior under contention.

I had no idea how to evaluate this claim. A quick search online returned results favoring parking_lot. This felt wrong to me. Why? It contradicted my belief that std should be the gold standard. The standard library team knows what they’re doing, right? And if parking_lot’s mutex really was the performance winner, there had to be trade-offs between the two implementations that people weren’t talking about.

That mystery haunted me. I couldn’t just take it on faith. So I jumped down the rabbit hole: read both implementations, wrote the benchmarks, and here we are. In this post, I will:

  • Explain how std implements the mutex (v1.90.0)

  • Explain how parking_lot implements their mutex (v0.12.5)

  • Show you the benchmark with key findings

  • Give you a decision guide for when to use each

But first, let’s ground our foundation on mutexes (skim it if you’re already familiar).

A classic example of the kind of problem a mutex solves is withdrawing and depositing money at the same time. Imagine you have $100 in your account. Thread A tries to withdraw $80, and Thread B tries to deposit $50. Without proper synchronization, both threads might read the balance as $100 simultaneously, then write back their results independently - and one of the two updates is silently lost.

A mutex solves this nicely by making one thread wait until the other finishes its update.

The operations that read and write the balance are what we need to protect - these are called critical sections . Any code that accesses shared data needs to be inside a critical section, guarded by a mutex.

Simple enough, right? Now let’s see how to use a mutex. (Again, skim this if it’s too basic for you)

In languages other than Rust, you typically declare a mutex separately from your data, then manually lock it before entering the critical section and unlock it afterward. In C++, for example, you’d declare a std::mutex next to the account balance, call lock() before touching the balance, and unlock() when you’re done.

The problem? Nothing stops you from accessing account without locking the mutex first. The compiler won’t catch this bug.

Rust takes a completely different approach - the mutex wraps and owns the data:
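A minimal sketch of the same account example in Rust:

```rust
use std::sync::Mutex;

fn main() {
    let account = Mutex::new(100_i64); // the mutex owns the balance

    {
        // lock() hands back a guard; the data is only reachable through it.
        let mut balance = account.lock().unwrap();
        *balance -= 80; // withdraw
    } // guard dropped here -> mutex automatically unlocked

    let mut balance = account.lock().unwrap();
    *balance += 50; // deposit
    println!("final balance: {}", *balance);
}
```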

Three things to pay close attention to:

  • The mutex wraps the data : This makes it impossible to access account without holding the lock. The compiler enforces this.

  • Automatic unlock : When you lock, you receive a guard. When the guard goes out of scope, it automatically unlocks. No manual cleanup needed.

  • Lock can fail : Notice the .unwrap() on .lock() ? It returns a Result because locking can fail due to poisoning. I’ll explain this shortly.

That’s enough of the basics. Let’s have some fun. Here is how mutex is implemented, starting with Rust std.

A quick look inside std::sync::Mutex gives us this:
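Roughly, the definition looks like this (paraphrased and simplified from the std source; sys and poison are std-internal modules, shown only for illustration):

```rust
pub struct Mutex<T: ?Sized> {
    inner: sys::Mutex,      // the platform-specific lock (more on this below)
    poison: poison::Flag,   // essentially an atomic bool
    data: UnsafeCell<T>,    // the value being protected
}
```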

  • data is the easy part. Since the mutex enforces exclusive access, Rust uses UnsafeCell to give you safe mutable access once it’s locked.

  • poison is just an atomic boolean flag that records whether a thread panicked while holding the lock, so you can decide how to handle that the next time you lock it.

inner is the interesting part; let’s look at it next. The real code is quite long, so here’s just a simplified version.
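(A condensed sketch of the idea only; the real cfg tree in std covers many more targets, and the module names below are illustrative stand-ins rather than std's real ones.)

```rust
// Illustrative stand-ins for std's internal backends, not the real modules.
mod futex_backend {
    pub struct Mutex; // an AtomicU32 driven by futex-style syscalls
}
mod fallback_backend {
    pub struct Mutex; // wraps the best platform primitive available
}

#[cfg(any(target_os = "linux", target_os = "android"))]
pub use futex_backend::Mutex;
#[cfg(not(any(target_os = "linux", target_os = "android")))]
pub use fallback_backend::Mutex;
```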

The main idea is that Rust uses a different Mutex implementation for each OS (and sometimes OS version). However, we can divide these implementations into two big groups: futex-based and other platform primitives.

  • Futex (short for “fast userspace mutex”) is used where the OS kernel exposes a “wait on this address” API. We will dive deeper into this one soon.

  • When that API is missing, Rust falls back to the best traditional lock the platform provides.

(I’m in awe btw - that’s a lot of different implementations. Writing and maintaining all this platform-specific stuff must be exhausting. Major respect to whoever’s doing this.)

Since the futex version is the most widely used and is quite typical of how a mutex is implemented, let’s look inside it.

At its heart, futex is just an atomic u32 (simplified here):
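Something along these lines (paraphrased; the point is that the whole lock state lives in a single atomic 32-bit word):

```rust
use std::sync::atomic::AtomicU32;

pub struct Mutex {
    futex: AtomicU32, // the entire lock state lives in this one word
}
```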

Atomic types have a powerful operation called “compare and swap” (CAS), which the CPU executes atomically: it compares the current value against an expected value, and only if they match does it write the new value.

In other words, if we use 0 for the unlocked state and 1 for the locked state, we get a simple mutex: a thread tries to CAS the state from 0 (unlocked) to 1 (locked), and if the state is currently 1, it keeps retrying until it succeeds.

So, a simplified version of the mutex looks like this:
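(Illustration only - a pure spin lock that captures just the CAS idea, not std's real code.)

```rust
use std::sync::atomic::{AtomicU32, Ordering};

pub struct SpinMutex {
    state: AtomicU32, // 0 = unlocked, 1 = locked
}

impl SpinMutex {
    pub const fn new() -> Self {
        Self { state: AtomicU32::new(0) }
    }

    pub fn lock(&self) {
        // Keep trying to flip 0 -> 1 until we win the race.
        while self
            .state
            .compare_exchange(0, 1, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            std::hint::spin_loop();
        }
    }

    pub fn unlock(&self) {
        self.state.store(0, Ordering::Release);
    }
}
```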

But you might ask: if the first thread holds the lock for a long time, does the second thread just keep trying in an infinite loop? And what if there are hundreds or thousands of them? The CPU would soon be burning cycles doing nothing useful.

Of course, there is a solution to this problem. In the real implementation, Rust’s futex mutex has 3 states:

  • 0: Unlocked

  • 1: Locked

  • 2: Contended - locked, but there are waiters.

Notice the Contended state? A thread will try its best to acquire the lock. But if it can’t, it will mark the lock as contended and go to sleep, waiting to be woken up when the mutex is released.

Note: before a thread goes to sleep, it spins, checking the lock state for about 100 iterations - if the lock becomes free, it immediately tries CAS. This avoids the expensive syscall if the lock is released quickly.

What happens when a thread goes to sleep? The kernel helps us put these sleeping threads into a queue. Take a look at the system call on Linux and Android to put the thread into sleeping state (this is usually called “park a thread”):
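Here is a minimal Linux-only sketch of the syscall wrappers and the three-state lock built on them (assuming the libc crate as a dependency; this mirrors the scheme described above, not std's actual code):

```rust
use std::sync::atomic::{AtomicU32, Ordering};

fn futex_wait(futex: &AtomicU32, expected: u32) {
    unsafe {
        // "Put me to sleep, but only if the value at this address is still `expected`."
        libc::syscall(
            libc::SYS_futex,
            futex as *const AtomicU32 as *const u32,
            libc::FUTEX_WAIT | libc::FUTEX_PRIVATE_FLAG,
            expected,
            std::ptr::null::<libc::timespec>(),
        );
    }
}

fn futex_wake_one(futex: &AtomicU32) {
    unsafe {
        // "Wake one thread queued on this address."
        libc::syscall(
            libc::SYS_futex,
            futex as *const AtomicU32 as *const u32,
            libc::FUTEX_WAKE | libc::FUTEX_PRIVATE_FLAG,
            1,
        );
    }
}

pub struct FutexMutex {
    futex: AtomicU32, // 0 = unlocked, 1 = locked, 2 = locked + waiters (contended)
}

impl FutexMutex {
    pub const fn new() -> Self {
        Self { futex: AtomicU32::new(0) }
    }

    pub fn lock(&self) {
        // Fast path: uncontended 0 -> 1.
        if self.futex.compare_exchange(0, 1, Ordering::Acquire, Ordering::Relaxed).is_ok() {
            return;
        }
        loop {
            // Spin briefly; if the lock frees up, grab it in the contended
            // state so information about other waiters is never lost.
            for _ in 0..100 {
                if self.futex.compare_exchange(0, 2, Ordering::Acquire, Ordering::Relaxed).is_ok() {
                    return;
                }
                std::hint::spin_loop();
            }
            // Advertise a waiter by forcing the state to 2. If the lock was
            // actually free (old value 0), the swap just acquired it for us.
            if self.futex.swap(2, Ordering::Acquire) == 0 {
                return;
            }
            // Sleep until woken, then race for the lock again.
            futex_wait(&self.futex, 2);
        }
    }

    pub fn unlock(&self) {
        // If the old state was 2, someone may be parked: wake one thread.
        if self.futex.swap(0, Ordering::Release) == 2 {
            futex_wake_one(&self.futex);
        }
    }
}
```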

The key part is futex as *const Atomic<u32> - you give the kernel a memory address, and it queues your thread there. Later, when you want to wake a thread, you give the kernel that same address, and it dequeues and wakes a sleeper.

When a thread finishes, it sets the state to unlocked. If the state was contended, it wakes one waiting thread via syscall. This continues until the queue empties.

The final piece of std’s mutex is poisoning, a unique feature you won’t find in most other languages.

One unique feature of Rust’s standard mutex is poisoning. When a thread panics while holding a lock, the mutex becomes “poisoned.” Any subsequent attempts to lock it will return an Err(PoisonError) , but you still get the guard inside the error:
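For example (a small sketch; whether and how you recover is up to you):

```rust
use std::sync::Mutex;

fn read_balance(account: &Mutex<i64>) -> i64 {
    let guard = match account.lock() {
        Ok(guard) => guard,
        // The mutex is poisoned, but the error still hands us the guard,
        // so we can inspect (and possibly repair) the data anyway.
        Err(poisoned) => poisoned.into_inner(),
    };
    *guard
}
```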

How does poisoning work? Conceptually, it happens in the MutexGuard::drop() path (simplified here for clarity):
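Conceptually, something like this (types and names simplified; not std's actual MutexGuard):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

struct Guard<'a> {
    poison_flag: &'a AtomicBool,
    panicking_at_lock: bool, // captured when the lock was taken
}

impl Drop for Guard<'_> {
    fn drop(&mut self) {
        // We weren't panicking when we acquired the lock, but we are now:
        // the panic happened inside the critical section, so poison the mutex.
        if !self.panicking_at_lock && thread::panicking() {
            self.poison_flag.store(true, Ordering::Relaxed);
        }
        // ...then release the underlying lock as usual (omitted here).
    }
}
```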

The guard captures whether the thread was panicking when the lock was acquired. If we weren’t panicking then but we are now, a panic must have occurred in the critical section. The mutex is marked as poisoned with a simple atomic store.

This is a “best effort” mechanism. It won’t catch all cases (like double panics or non-Rust exceptions), but it provides a useful safety net. The key insight is that you still get access to the data even if the mutex is poisoned, allowing you to inspect and potentially recover from the corrupted state.

Big note: mutex poisoning gets both love and hate. It catches data corruption but feels awkward compared to other languages (Me too, I know it’s helpful. But I still hate it, lol). The Rust team is adding a non-poisoning variant - see issue #134645.

parking_lot takes a fundamentally different approach. Two key differences:

  • std uses different mutex implementations per platform. parking_lot uses one algorithm everywhere, calling platform-specific code only for sleep/wake.

  • std’s queues live in the kernel. parking_lot manages its own queues in user space via a global hash table.

parking_lot’s mutex is remarkably small:
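Roughly (paraphrased and simplified from parking_lot's RawMutex and the lock_api wrapper around it):

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::AtomicU8;

const LOCKED_BIT: u8 = 0b01;
const PARKED_BIT: u8 = 0b10;

pub struct RawMutex {
    state: AtomicU8, // LOCKED_BIT | PARKED_BIT, nothing else
}

// lock_api::Mutex then pairs the raw lock with the data it protects:
pub struct Mutex<T: ?Sized> {
    raw: RawMutex,
    data: UnsafeCell<T>,
}
```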

Why can parking_lot use just one byte while std needs more? It comes down to how queues work.

std’s futex uses the kernel to manage wait queues. When you call the futex syscall, you pass the memory address of your atomic variable, and the kernel uses that address as the queue ID. But there’s a catch: the kernel requires this address to be aligned to a 32-bit boundary. So std’s mutex must use AtomicU32 , even though it only needs a few bits for state.

parking_lot manages its own queues in user space. It hashes the mutex’s memory address to find the right queue bucket. Since it doesn’t need to satisfy kernel alignment requirements, it can use a single AtomicU8 .

More states for queue bookkeeping

Using separate bits gives parking_lot four possible states:

  • 00 : Unlocked, no waiters

  • 01 : Locked, no waiters

  • 10 : Unlocked, but threads still waiting

  • 11 : Locked with waiters

That third state ( 10 ) might seem odd at first. Why would a mutex be unlocked but still have waiting threads? This is a transient state that happens during parking_lot’s unlock process. Because parking_lot manages its own queue, it uses the PARKED_BIT as bookkeeping to track whether threads are still in the queue. This helps avoid lost wakeups where a thread might miss its notification. It’s not an advantage over std, just a consequence of managing queues in user space rather than delegating to the kernel.

When a thread can’t acquire the lock, it needs somewhere to wait. This is where parking_lot’s global hash table comes in.

Instead of each mutex maintaining its own queue (like kernel futexes do), parking_lot uses a single global hash table shared by all mutexes in your program. When a thread needs to wait (a toy sketch of the address-to-bucket mapping follows this list):

  1. Hash the mutex’s memory address to find a bucket in the global table

  2. Add the thread to the bucket’s wait queue

  3. Go to sleep
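A toy sketch of the address-to-bucket mapping only (not parking_lot's real table, hash function, or intrusive queues):

```rust
use std::sync::Mutex as StdMutex;
use std::thread::ThreadId;

const NUM_BUCKETS: usize = 256;

struct Bucket {
    // parking_lot keeps an intrusive linked list of per-thread records;
    // a Vec of thread IDs stands in for that here.
    queue: StdMutex<Vec<ThreadId>>,
}

fn bucket_for(table: &[Bucket; NUM_BUCKETS], mutex_addr: usize) -> &Bucket {
    // A toy hash; parking_lot uses a multiplicative hash over the address.
    &table[(mutex_addr >> 4) % NUM_BUCKETS]
}
```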

Being able to manage the thread queue itself is important for parking_lot to enforce fairness. As you can see right away in the next section.

Here’s where parking_lot differs from std in behavior. std’s futex uses a “barging” strategy where any active thread can grab the lock when it’s released, even if others have been waiting in the queue longer. This maximizes throughput but can cause starvation.

When a thread unlocks, there are two sources of threads that can lock again:

  • An active thread that is calling for locking

  • A sleeping thread in the queue

As you can see, the active thread tends to win the race for the lock. So if a thread keeps locking, doing its work, unlocking, and immediately locking again, it can starve all the other threads.

In this scenario, thread A keeps grabbing the lock immediately after releasing it. Threads B and C do get woken up by the syscall, but by the time they try to acquire the lock, thread A has already grabbed it again. They’re completely starved.

parking_lot implements “eventual fairness” to prevent this.

Each bucket in the hash table has a timer that fires approximately every 0.5 milliseconds. When the timer fires, the next unlock becomes a “fair unlock”:

  1. The unlocker keeps the LOCKED_BIT set

  2. The woken thread receives the lock directly (a “handoff”)

  3. That thread owns the lock immediately without racing with other active threads

So instead of letting whichever thread is fastest grab the lock, parking_lot hands the lock directly to the next thread in the queue (the LOCKED_BIT stays set during the handoff; the mutex is never actually unlocked in between).

This timer-based approach means parking_lot is unfair most of the time (for performance), but guarantees fairness every ~0.5ms to prevent any thread from being starved indefinitely. You can also force a fair unlock explicitly with unlock_fair() if needed.

This eventual fairness technique from parking_lot is pretty clever, isn’t it?

You might wonder by now: how does parking_lot put threads to sleep without storing a 32-bit futex word inside every mutex? The answer is thread-local storage.

parking_lot still uses the same futex syscall as std, but it doesn’t pass the mutex address to the kernel. Instead, each thread owns a ThreadData record containing a reusable i32 . Because this integer lives in thread-local storage, each thread has a unique address to use for the syscall. In the kernel, every blocked thread sits in its own single-entry queue.

The code looks almost identical to std’s futex path. The only difference? parking_lot points the syscall at the thread-local integer instead of the mutex:
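A sketch of the park side, Linux path only, assuming the libc crate (names and the exact wait protocol are illustrative, not parking_lot's real ThreadParker):

```rust
use std::sync::atomic::{AtomicI32, Ordering};

thread_local! {
    // One reusable futex word per thread; parking_lot keeps this inside a
    // per-thread ThreadData record and shares its address via the bucket queue.
    static PARK_WORD: AtomicI32 = AtomicI32::new(0);
}

#[cfg(target_os = "linux")]
fn park_current_thread() {
    PARK_WORD.with(|word| {
        word.store(1, Ordering::Relaxed); // 1 = "about to sleep"
        while word.load(Ordering::Acquire) == 1 {
            unsafe {
                // Same futex syscall as std, but aimed at the thread-local
                // word instead of the mutex itself.
                libc::syscall(
                    libc::SYS_futex,
                    word as *const AtomicI32 as *const i32,
                    libc::FUTEX_WAIT | libc::FUTEX_PRIVATE_FLAG,
                    1,
                    std::ptr::null::<libc::timespec>(),
                );
            }
        }
    });
}
```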

When unlocking, parking_lot uses the same per-thread integer and issues FUTEX_WAKE on that address. This way, the mutex stays just one byte while the thread-local helper handles all the sleeping and waking.

On platforms without futexes, the parker swaps in the local blocking primitive ( pthread_cond_t on macOS/BSD, Windows’ WaitOnAddress /keyed events, or a plain spin loop) while keeping the same per-thread queue design.

Now let’s see how these implementations perform in practice. I ran benchmarks across four scenarios that reveal different aspects of mutex behavior. All benchmarks ran on Linux with the futex backend for std. You can find the source code and full report at https://github.com/cuongleqq/mutex-benches .

For each scenario, you’ll see:

  • Per-thread operation counts : Shows how many lock acquisitions each thread completed

  • Performance metrics : Throughput, wait latencies (median, mean, P99), and standard deviation

  • Analysis : What the results tell us about each mutex’s behavior

(If the numbers feel overwhelming, just read the scenario configuration and skip straight to the takeaway .)

Configuration: 4 threads, 10 seconds, minimal work in critical section

Scenario: This simulates a typical application where threads frequently acquire and release locks with very little work inside the critical section. Each thread simply increments a counter, representing the common case of protecting small data structures or quick state updates.

Takeaway: In moderate contention with short critical sections, std’s futex shines with 9% higher throughput and lower average latency. The uncontended fast path and efficient kernel-managed queues work well here. However, look at the per-thread operations: std has 5.6% variation (20.6M vs 19.4M) while parking_lot has only 3.9% (18.9M vs 18.2M). Even in this favorable scenario for std, parking_lot’s fairness mechanism ensures more even work distribution across threads.

Configuration: 8 threads, 10 seconds, 500µs sleep while holding lock

Scenario: This tests heavy contention where threads hold the lock for a long time (500 microseconds). This simulates scenarios like I/O operations, slow computation, or accessing remote resources while holding a lock. With 8 threads competing for a lock that’s held for 500µs each time, contention is severe.

Takeaway: This benchmark reveals std’s critical weakness under heavy contention. Look at thread 3 in std: it completed only 66 operations while thread 5 completed 1,394. That’s a 95.3% variation - complete starvation. The extremely low median (125ns) combined with massive standard deviation (188.73ms) shows most lock attempts are fast, but some threads suffer extreme delays and essentially never get the lock.

parking_lot tells a different story. Every thread completed 860-877 operations (1.9% variation). The fairness mechanism worked exactly as designed. Yes, parking_lot has 7.5% lower throughput and higher median wait time, but that’s because it’s ensuring all threads make progress. The 51x more stable wait times (3.67ms vs 188.73ms standard deviation) show the predictability benefit. When fairness matters, parking_lot prevents the pathological starvation that std exhibits.

Configuration: 8 threads, 15 seconds, 200ms active / 800ms idle

Scenario: This simulates bursty workloads where threads alternate between periods of high activity (200ms of rapid lock acquisitions) and idle periods (800ms sleep). Think of web servers handling traffic spikes, batch processing systems, or applications with periodic activity patterns. This tests how mutexes handle sudden contention spikes followed by quiet periods.

Takeaway: parking_lot excels in bursty workloads, achieving 18.5% higher throughput than std. During activity bursts, all 8 threads compete intensely for the lock. parking_lot’s adaptive spinning and fairness mechanisms handle these periodic spikes better, ensuring more even work distribution (9.9% variation vs 13.6%). The 24.8% more stable wait times show parking_lot handles the transitions between idle and active periods more smoothly. While std has lower tail latencies, parking_lot’s better stability and fairness during bursts translate to higher overall throughput.

Configuration: 6 threads, 15 seconds, one thread monopolizes (sleeps 500µs while holding lock)

Scenario: This tests the worst-case scenario: one “hog” thread repeatedly acquires the lock and holds it for 500µs, while 5 other threads compete normally. This simulates real-world situations like priority inversion, where a high-priority or busy thread keeps grabbing the lock immediately after releasing it, potentially starving other threads. Can the mutex prevent monopolization?
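As a sketch of this setup (again my own illustration under the same assumptions as the earlier harness, not the author's code), the hog differs from the other threads only in sleeping while it holds the guard:

use std::sync::{Arc, Mutex};
use std::thread;
use std::time::{Duration, Instant};

fn main() {
    let lock = Arc::new(Mutex::new(()));
    let deadline = Instant::now() + Duration::from_secs(15);

    // The "hog": re-acquires immediately and holds the lock for 500µs each time.
    let hog = {
        let lock = Arc::clone(&lock);
        thread::spawn(move || {
            let mut ops = 0u64;
            while Instant::now() < deadline {
                let _guard = lock.lock().unwrap();
                thread::sleep(Duration::from_micros(500)); // long hold
                ops += 1;
            }
            ops
        })
    };

    // Five "normal" threads that just want brief access.
    let others: Vec<_> = (0..5)
        .map(|_| {
            let lock = Arc::clone(&lock);
            thread::spawn(move || {
                let mut ops = 0u64;
                while Instant::now() < deadline {
                    let _guard = lock.lock().unwrap();
                    ops += 1;
                }
                ops
            })
        })
        .collect();

    println!("hog: {} ops", hog.join().unwrap());
    for (i, handle) in others.into_iter().enumerate() {
        println!("thread {i}: {} ops", handle.join().unwrap());
    }
}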

Takeaway: This is the smoking gun that demonstrates std’s fundamental unfairness. The hog thread completed 12,242 operations while the other threads completed only 6-16 operations each. That’s complete starvation - the non-hog threads essentially never got the lock. The 100% variation and 130ms standard deviation show the extreme unpredictability.

parking_lot’s fairness timer prevented this catastrophe. The hog still got more operations (9,168) but nowhere near monopolization. All other threads made meaningful progress (7,023-7,109 operations). The result: 261.6% higher overall throughput because all 6 threads contributed work instead of 5 threads sitting idle. The 120x more stable wait times (1.09ms vs 130.76ms) show parking_lot’s predictability. The 0.5ms fairness timer does exactly what it promises: prevent any thread from monopolizing the lock indefinitely.

After diving deep into the implementations and running comprehensive benchmarks, here’s when to use each.

Use std::Mutex when:

  1. You need zero dependencies - It’s in std, always available

  2. Low to moderate contention with short critical sections - futex implementation is excellent here (9% faster throughput in our short-hold test)

  3. You want poisoning for debugging - Helps catch panic-related bugs during development

  4. Platform-specific optimizations matter - Gets priority inheritance on Fuchsia, etc.

Use parking_lot::Mutex when:

  1. Fairness is critical - Prevents thread starvation (49x better fairness in heavy contention)

  2. Risk of monopolization exists - The hog scenario showed 261.6% better throughput by preventing starvation

  3. Bursty workloads - 18.5% faster in our burst scenario

  4. You need predictable behavior - 51x more stable latency under heavy load

  5. Memory footprint matters - Always 1 byte regardless of platform

  6. You want timeouts or fairness control - try_lock_for(), unlock_fair(), etc. (see the sketch after this list)

  7. Cross-platform consistency is important - Same behavior everywhere
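Here is a minimal sketch of the timeout and fair-unlock APIs mentioned in item 6, assuming parking_lot is added as a dependency:

use std::time::Duration;
use parking_lot::Mutex;

fn main() {
    let m = Mutex::new(0u32);

    // Timed acquisition: give up if the lock isn't available within 10ms.
    if let Some(mut guard) = m.try_lock_for(Duration::from_millis(10)) {
        *guard += 1;
        // Fair unlock: hand the lock to the longest-waiting thread instead of
        // letting this thread (or a spinning newcomer) barge back in.
        parking_lot::MutexGuard::unlock_fair(guard);
    } else {
        println!("lock was busy for more than 10ms");
    }
}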

The benchmarks reveal a fundamental trade-off: std::Mutex optimizes for throughput in the average case, while parking_lot::Mutex optimizes for fairness and predictability in the worst case.

For most applications, where contention is light and critical sections are short, std::Mutex performs excellently. But if your application has any of these characteristics:

  • Long-running critical sections

  • Risk of lock monopolization (e.g., one high-priority thread)

  • Need for predictable latency across all threads

  • Requirement that all threads make forward progress

Then parking_lot::Mutex’s eventual fairness mechanism becomes invaluable. The 0.5ms fairness timer is a small price to pay for preventing complete thread starvation.

If you made it this far, you’re probably as obsessed with understanding how things really work as I am. I’m Cuong, and I write about Rust and programming. If you share the same passion, I’d love to connect with you. Feel free to reach out on X, LinkedIn, or subscribe to my blog (substack, medium) to keep pushing the boundaries together!

Life in London With an Android Phone

Daring Fireball
www.londoncentric.media
2025-11-18 16:34:43
London Centric: Sam was walking past a Royal Mail depot in south London in January when his path was blocked by a group of eight men. “I tried to move to let them pass, but the last guy blocked the path,” the 32-year-old told London Centric. “They started pushing me and hitting me, telling me t...
Original Article

Sam was walking past a Royal Mail depot in south London in January when his path was blocked by a group of eight men.

“I tried to move to let them pass, but the last guy blocked the path,” the 32-year-old told London Centric. “They started pushing me and hitting me, telling me to give them everything.”

The thieves took Sam’s phone, his camera and even the beanie hat off his head. After checking Sam had nothing else on him, they started to run off.

What happened next was a surprise. With most of the gang already heading down the Old Kent Road, one turned around and handed Sam back his Android phone.

The thief bluntly told him why: “Don’t want no Samsung.”

“If anything I feel a bit rejected”

Anyone who has had their phone snatched knows it’s a crime that can be over in a few seconds and leave you feeling completely helpless. One minute you’re walking down Oxford Street checking WhatsApp, the next you’re watching helplessly as your device starts its tinfoil-wrapped journey to markets in Algeria or China, possibly via an overnight stint in a flowerbed.

While it might feel like London’s phone thieves will indiscriminately take any device they can get their hands on, there are a growing number of indicators that many of them are only interested in Apple iPhones.

Multiple capital-dwelling Android users who shared their stories with London Centric said they had been on an emotional journey after they were mugged — only to have their device handed back.

One of them was Mark, who was sitting outside his workplace in Hackney when he heard the sudden whoosh of an e-bike coming from behind him at speed. Suddenly his Samsung Galaxy was lifted from his hand by a young man on the bike.

“I didn’t realise what happened immediately but as soon as I did, I went after him.” But with little chance of catching the thief on foot, Mark ultimately gave up — only to be amazed by what happened next: “I saw him stop, look at my phone, then throw it on the floor. He cycled off and I retrieved my phone.”

As the thief didn’t throw it with force there was “no damage”, he said.

That is, other than the harm caused to Mark’s ego: “If anything I feel a bit rejected. My poor phone.”

“The resale value is what thieves are most interested in”

The UK phone market is split broadly evenly, with around half of users owning an iPhone and half using an Android device, which are often cheaper.

If thefts were random — not impossible if they are carried out by someone speeding down a street at 30mph on an illegal e-bike — you’d expect the number of stolen devices to be split broadly down the middle.

Yet the limited data we do have, combined with the testimony of experts and the experience of ordinary Londoners, points towards a clear pro-iPhone bias among the capital’s thieves.

Jake Moore, an advisor for cybersecurity firm ESET, told London Centric that it’s simply because iPhones are worth more: “Apple devices have a higher secondhand market value and it makes more economic sense to pursue these more sought-after phones rather than cheaper models with a lower secondhand price,” he said.

Moore said that the security features on Androids and iPhones are similar, making it unlikely that Apple handsets are being targeted because they are easier to unlock: “Fundamentally, the resale value is what thieves are most interested in.”

Both the Metropolitan police and the City of London police, its sister force in the square mile, have been targeting the organised criminals responsible for phone theft in recent months. Neither had data to hand on the split between iPhone and Android thefts, although one police source said they were aware of thieves discarding older models of phones. The thinking is that there’s no point being charged over something with limited value.

Back in the mid 2010s, the Home Office published two reports into phone theft across the UK, as part of a research project that has since been abandoned. Buried inside the dense publications was an attempt to construct a phone theft index showing which model had a disproportionate chance of getting taken. iPhones were constantly at the top of the list, even before the development of the global criminal networks that specialise in shipping thousands of Apple devices to global markets.

“I thought he was trying to show me his music”

Sometimes the rejection of the Android device can border on farcical, especially when thieves try to use social engineering rather than violence to obtain a phone.

London Centric reader Simon was walking down Brockley high street earlier this year when a man caught his eye from across the road: “He did it in a very friendly way, as you might if you’d spotted a really good old friend who you haven’t seen in ages.”

The man “bounded across the road with this very friendly demeanour” and struck up a conversation, before asking if Simon had Spotify: “At that point I thought he was trying to show me his music, like when someone tries to sell you a CD they’ve burned of their latest musical project in a guerilla marketing way.”

“The guy was in his 20s, but there was this kid lingering behind him, just looking expectantly.”

Simon, keen to support a local musician, got out his phone to open Spotify. At this point the man saw that he was using a Samsung Galaxy, dropped all interest, and began walking away. It was then that Simon realised that, rather than this being a cultural exchange, he was actually in the middle of a potential mugging that had been averted by his choice of mobile.

He overheard the would-be thief explaining to his apparent accomplice why they were giving up: “Phone’s dead, innit.”


A giant Christmas artwork depicting hordes of disturbing animals and uncanny-looking people having a feast by a wintery river has recently appeared in Kingston upon Thames. What is less clear is whether misfiring artificial intelligence was used to create the enormous dystopia — or if the creator simply intended to fill a wall with Hieronymus Bosch-style creations.

At first glance the artwork, which covers a massive wall above a branch of Bill’s restaurant in a shopping centre on the banks of the Thames, looks relatively normal. But zoom in and you’ll notice a one-legged dog-cum-chicken, a distorted snowman standing on water, and humans blending into the bodies of animals.

After images were posted on Reddit , some of which are reproduced here with permission from the original user, London Centric immediately travelled to south west London to try to understand what was going on.

The true scale of the artwork’s strangeness, stretching across 10 metres of wall, was overwhelming.

When we visited Kingston on Monday night it was to gawp at the future of art.

The local council denied any involvement in its creation and directed our questions to the shopping centre’s owners, who have said it was “inspired by the work of Pieter Bruegel the Elder ”.

Suspicions that AI may have been involved in its creation remain unconfirmed at the time of publication. Not that it’s stopped people speculating.

Toby, a man in his early 30s who was out for an evening walk across Kingston, told us he thought it was clear what was responsible for the image: “It’s just so lazy to use AI for this.”

Another passerby, when asked for their opinion, took one look and responded “fuck sake”.

Last month London Centric reported on the toxic battle for control of the 1,200-home Loughborough Estate near Brixton. It is run by a board headed by Peter Shorinwa, who previously accused Lambeth Council of attempting to assassinate him in order to regain control of the estate.

With some residents trying to remove him from his position, Shorinwa recently spent hundreds of thousands of pounds on gifts , including branded leather goods, to be handed out to locals. Residents were due to attend an upcoming annual general meeting, where they would have the opportunity to ask management face-to-face how millions of pounds of service fees are being spent and why chemicals were allegedly stored in a community centre.

However, this weekend, Shorinwa sent a printed letter to thousands of residents telling them the meeting will instead be held on a Zoom call.

“We must not allow the devil and the people used by him to get their evil wishes across,” he explained.

Shorinwa claimed in the letter he could not meet residents face-to-face because “we don’t want any stabbing or gunshot in our Estate through the actions of some Lambeth staffs and its cohort [sic]”. He criticised “the pandemonium caused by this aggressive and vocal handful” who he said were encouraging “persons to carry out harm on fellow residents” by asking the council to intervene: “I strongly believe lives are at stake [and] for these reasons we will not be holding a face-face AGM because life is precious.”

One resident in favour of council intervention told London Centric: “It is absurd that the Loughborough Estate is still being run by the clearly deranged Peter Shorinwa.”

We R Blighty, the much-criticised fundraiser featured in a past edition of London Centric, has been yet again found guilty of illegal fundraising in the City of London. Despite this they continue to trade as usual and have been spotted all over the capital in the last week — having switched to selling a magazine to sidestep fundraising laws .

Yards away from Columbia Road flower market sits Tomlinson House, a post-war social housing block. Inside, residents are making a last-ditch bid to stop it from being privatised by a property developer who is threatening to sue them for £1.5m if they disrupt the deal.

The residents of Tomlinson Close spoke to London Centric after reading our investigation into the developer James Gold, who was described by the Royal Institute of Chartered Surveyors as a “dishonest” individual who poses a “significant risk to the public”.

On Wednesday, Tower Hamlets Council is poised to hand over ownership of the building to a company controlled by Gold. The residents are making a desperate plea for local and national politicians reading this newsletter to intervene at the 11th hour to stop the sale — and want to issue a warning to other Londoners tempted by similar offers.

London Centric is the only news outlet to have covered the activities of Gold , a legally-trained property developer with a string of failed businesses behind him. In April we revealed the legal loophole Gold is using to take control of council-owned housing blocks across London before transferring their ownership to tax haven companies controlled by his mother.

Leaseholders in Tomlinson Close were among those who signed up to one of Gold’s schemes. He promised to pay a modest cash payment and handle the legal paperwork to take ownership of their social housing block away from the local council. In return he would gain the right to build an extra floor of flats on the roof of the existing property. Now, on the eve of the deal, the residents claim Gold is trying to impose drastically worse terms at the last minute — and threatening to sue them for millions of pounds if they don’t agree. The leaseholders argue they have been duped and risk being left destitute. Gold did not reply to a request for comment on this claim.

Pam, a leaseholder who has lived in the dilapidated building for 33 years, told us: “At my age I don’t need this. Not knowing the financial implications especially when on a fixed income has caused me many sleepless nights.”

Today London Centric is removing the paywall from the original piece of reporting . Our members funded the investigation but we now want to ensure everyone can read it for free, due to the strong public interest in keeping Londoners informed about Gold’s actions.

Thanks to all the London Centric subscribers who made this edition possible. All of our best stories come from our community of readers — please do get in touch if you’ve got a tip we should be looking into via WhatsApp or email .


Google Summer of Code 2025 results

Lobsters
blog.rust-lang.org
2025-11-18 16:33:39
Comments...
Original Article

As we announced earlier this year, the Rust Project participated in Google Summer of Code (GSoC) for the second time. Almost twenty contributors worked very hard on their projects for several months. As was the case last year, the projects had various durations, so some of them ended in September, while the last ones concluded in the middle of November. Now that the final reports of all projects have been submitted, we are happy to announce that 18 out of 19 projects were successful! We had a very large number of projects this year, so we consider this number of successfully finished projects to be a great result.

We had awesome interactions with our GSoC contributors over the summer, and through a video call, we also had a chance to see each other and discuss the accepted GSoC projects. Our contributors have learned a lot of new things and collaborated with us on making Rust better for everyone, and we are very grateful for all their contributions! Some of them have even continued contributing after their project has ended, and we hope to keep working with them in the future, to further improve open-source Rust software. We would like to thank all our Rust GSoC 2025 contributors. You did a great job!

Same as last year, Google Summer of Code 2025 was overall a success for the Rust Project, this time with more than double the number of projects. We think that GSoC is a great way of introducing new contributors to our community, and we are looking forward to participating in GSoC (or similar programs) again in the near future. If you are interested in becoming a (GSoC) contributor, check out our GSoC project idea list and our guide for new contributors .

Below you can find a brief summary of our GSoC 2025 projects. You can find more information about the original goals of the projects here. For easier navigation, here is an index of the project descriptions in alphabetical order:

  • ABI/Layout handling for the automatic differentiation feature
  • Add safety contracts
  • Bootstrap of rustc with rustc_codegen_gcc
  • Cargo: Build script delegation
  • Distributed and resource-efficient verification
  • Enable Witness Generation in cargo-semver-checks
  • Extend behavioural testing of std::arch intrinsics
  • Implement merge functionality in bors
  • Improve bootstrap
  • Improve Wild linker test suites
  • Improving the Rustc Parallel Frontend: Parallel Macro Expansion
  • Make cargo-semver-checks faster
  • Make Rustup Concurrent
  • Mapping the Maze of Rust's UI Test Suite with Established Continuous Integration Practices
  • Modernising the libc Crate
  • Prepare stable_mir crate for publishing
  • Prototype an alternative architecture for cargo fix using cargo check
  • Prototype Cargo Plumbing Commands

And now strap in, as there is a ton of great content to read about here!

ABI/Layout handling for the automatic differentiation feature

The std::autodiff module allows computing gradients and derivatives in the calculus sense. It provides two autodiff macros, which can be applied to user-written functions and automatically generate modified versions of those functions, which also compute the requested gradients and derivatives. This functionality is very useful especially in the context of scientific computing and implementation of machine-learning models.

Our autodiff frontend was facing two challenges.

  • First, we would generate a new function through our macro expansion, however, we would not have a suitable function body for it yet. Our autodiff implementation relies on an LLVM plugin to generate the function body. However, this plugin only gets called towards the end of the compilation pipeline. Earlier optimization passes, either on the LLVM or the Rust side, could look at the placeholder body and either "optimize" or even delete the function since it has no clear purpose yet.
  • Second, the flexibility of our macros was causing issues, since it allows requesting derivative computations on a per-argument basis. However, when we start to lower Rust arguments to our compiler backends like LLVM, we do not always have a 1:1 match of Rust arguments to LLVM arguments. As a simple example, an array with two double values might be passed as two individual double values on LLVM level, whereas an array with three doubles might be passed via a pointer.

Marcelo helped rewrite our autodiff macros to not generate hacky placeholder function bodies, but instead introduced a proper autodiff intrinsic. This is the proper way for us to declare that an implementation of this function is not available yet and will be provided later in the compilation pipeline. As a consequence, our generated functions were not deleted or incorrectly optimized anymore. The intrinsic PR also allowed removing some previous hacks and therefore reduced the total lines of code in the Rust compiler by over 500! You can find more details in this PR .

Beyond autodiff work, Marcelo also initiated work on GPU offloading intrinsics, and helped with multiple bugs in our argument handling. We would like to thank Marcelo for all his great work!

Add safety contracts

The Rust Project has an ambitious goal to instrument the Rust standard library with safety contracts , moving from informal comments that specify safety requirements of unsafe functions to executable Rust code. This transformation represents a significant step toward making Rust's safety guarantees more explicit and verifiable. To prioritize which functions should receive contracts first, there is a verification contest ongoing.

Given that Rust contracts are still in their early stages , Dawid's project was intentionally open-ended in scope and direction. This flexibility allowed Dawid to identify and tackle several key areas that would add substantial value to the contracts ecosystem. His contributions were in the following three main areas:

  • Pragmatic Contracts Integration : Refactoring contract HIR lowering to ensure no contract code is executed when contract-checks are disabled. This has major impact as it ensures that contracts do not have runtime cost when contract checks are disabled.

  • Variable Reference Capability : Adding the ability to refer to variables from preconditions within postconditions. This fundamental enhancement to the contracts system has been fully implemented and merged into the compiler. This feature provides developers with much more expressive power when writing contracts, allowing them to establish relationships between input and output states.

  • Separation Logic Integration : The bulk of Dawid's project involved identifying, understanding, and planning the introduction of owned and block ownership predicates for separation-logic style reasoning in contracts for unsafe Rust code. This work required extensive research and collaboration with experts in the field. Dawid engaged in multiple discussions with authors of Rust validation tools and Miri developers, both in person and through Zulip discussion threads. The culmination of this research is captured in a comprehensive MCP (Major Change Proposal) that Dawid created .

Dawid's work represents crucial foundational progress for Rust's safety contracts initiative. By successfully implementing variable reference capabilities and laying the groundwork for separation logic integration, he has positioned the contracts feature for significant future development. His research and design work will undoubtedly influence the direction of this important safety feature as it continues to mature. Thank you very much!

Bootstrap of rustc with rustc_codegen_gcc

The goal of this project was to improve the Rust GCC codegen backend (rustc_codegen_gcc), so that it would be able to compile the "stage 2" [1] Rust compiler (rustc) itself again.

You might remember that Michał already participated in GSoC last year , where he was working on his own .NET Rust codegen backend, and he did an incredible amount of work. This year, his progress was somehow even faster. Even before the official GSoC implementation period started (!), he essentially completed his original project goal and managed to build rustc with GCC. This was no small feat, as he had to investigate and fix several miscompilations that occurred when functions marked with #[inline(always)] were called recursively or when the compiled program was trying to work with 128-bit integers. You can read more about this initial work at his blog .

After that, he immediately started working on stretch goals of his project. The first one was to get a "stage-3" rustc build working, for which he had to vastly improve the memory consumption of the codegen backend.

Once that was done, he moved on to yet another goal, which was to build rustc for a platform not supported by LLVM. He made progress on this for Dec Alpha and m68k . He also attempted to compile rustc on Aarch64, which led to him finding an ABI bug. Ultimately, he managed to build a rustc for m68k (with a few workarounds that we will need to fix in the future). That is a very nice first step to porting Rust to new platforms unsupported by LLVM, and is important for initiatives such as Rust for Linux .

Michał had to spend a lot of time starting into assembly code and investigating arcane ABI problems. In order to make this easier for everyone, he implemented support for fuzzing and automatically checking ABI mismatches in the GCC codegen backend. You can read more about his testing and fuzzing efforts here .

We were really impressed with what Michał was able to achieve, and we really appreciated working with him this summer. Thank you for all your work, Michał!

Cargo: Build script delegation

Cargo build scripts come at a compile-time cost, because even to run cargo check , they must be built as if you ran cargo build , so that they can be executed during compilation. Even though we try to identify ways to reduce the need to write build scripts in the first place, that may not always be doable. However, if we could shift build scripts from being defined in every package that needs them, into a few core build script packages, we could both reduce the compile-time overhead, and also improve their auditability and transparency. You can find more information about this idea here .

The first step required to delegate build scripts to packages is to be able to run multiple build scripts per crate, so that is what Naman was primarily working on. He introduced a new unstable multiple-build-scripts feature to Cargo, implemented support for parsing an array of build scripts in Cargo.toml , and extended Cargo so that it can now execute multiple build scripts while building a single crate. He also added a set of tests to ensure that this feature will work as we expect it to.

Then he worked on ensuring that the execution of builds scripts is performed in a deterministic order, and that crates can access the output of each build script separately. For example, if you have the following configuration:

[package]
build = ["windows-manifest.rs", "release-info.rs"]

then the corresponding crate is able to access the OUT_DIRs of both build scripts using env!("windows-manifest_OUT_DIR") and env!("release-info_OUT_DIR").

As future work, we would like to implement the ability to pass parameters to build scripts through metadata specified in Cargo.toml and then implement the actual build script delegation to external build scripts using artifact-dependencies .

We would like to thank Naman for helping improving Cargo and laying the groundwork for a feature that could have compile-time benefits across the Rust ecosystem!

Distributed and resource-efficient verification

The goal of this project was to address critical scalability challenges of formally verifying Rust's standard library by developing a distributed verification system that intelligently manages computational resources and minimizes redundant work. The Rust standard library verification project faces significant computational overhead when verifying large codebases, as traditional approaches re-verify unchanged code components. With Rust's standard library containing thousands of functions and continuous development cycles, this inefficiency becomes a major bottleneck for practical formal verification adoption.

Jiping implemented a distributed verification system with several key innovations:

  • Intelligent Change Detection : The system uses hash-based analysis to identify which parts of the codebase have actually changed, allowing verification to focus only on modified components and their dependencies.
  • Multi-Tool Orchestration : The project coordinates multiple verification backends including Kani model checker, with careful version pinning and compatibility management.
  • Distributed Architecture : The verification workload is distributed across multiple compute nodes, with intelligent scheduling that considers both computational requirements and dependency graphs.
  • Real-time Visualization : Jiping built a comprehensive web interface that provides live verification status, interactive charts, and detailed proof results. You can check it out here !

You can find the created distributed verification tool in this repository. Jiping's work established a foundation for scalable formal verification that can adapt to the growing complexity of Rust's ecosystem, while maintaining verification quality and completeness, which will go a long way towards ensuring that Rust's standard library remains safe and sound. Thank you for your great work!

Enable Witness Generation in cargo-semver-checks

cargo-semver-checks is a Cargo subcommand for finding SemVer API breakages in Rust crates. Talyn's project aimed to lay the groundwork for it to tackle our most vexing limitation: the inability to catch SemVer breakage due to type changes.

Imagine a crate makes the following change to its public API:

// baseline version
pub fn example(value: i64) {}

// new version
pub fn example(value: String) {}

This is clearly a major breaking change, right? And yet cargo-semver-checks with its hundreds of lints is still unable to flag this. While this case seems trivial, it’s just the tip of an enormous iceberg. Instead of changing i64 to String, what if the change was from i64 to impl Into<i64>, or worse, into some monstrosity like:

pub fn example<T, U, const N: usize>(
    value: impl for<'a> First<'a, T> + Second<U, N> + Sync
) {}

Figuring out whether this change is breaking requires checking whether the original i64 parameter type can "fit" into that monstrosity of an impl Trait type. But reimplementing a Rust type checker and trait solver inside cargo-semver-checks is out of the question! Instead, we turn to a technique created for a previous study of SemVer breakage on crates.io —we generate a "witness" program that will fail to compile if, and only if, there's a breaking change between the two versions.

The witness program is a separate crate that can be made to depend on either the old or the new version of the crate being scanned. If our example function comes from a crate called upstream , its witness program would look something like:

// take the same parameter type as the baseline version
fn witness(value: i64) {
    upstream::example(value);
}

This example is cherry-picked to be easy to understand. Witness programs are rarely this straightforward!

Attempting to cargo check the witness while plugging in the new version of upstream forces the Rust compiler to decide whether i64 matches the new impl Trait parameter. If cargo check passes without errors, there's no breaking change here. But if there's a compilation error, then this is concrete, incontrovertible evidence of breakage!

Over the past 22+ weeks, Talyn worked tirelessly to move this from an idea to a working proof of concept. For every problem we foresaw needing to solve, ten more emerged along the way. Talyn did a lot of design work to figure out an approach that would be able to deal with crates coming from various sources (crates.io, a path on disk, a git revision), would support multiple rustdoc JSON formats for all the hundreds of existing lints, and do so in a fashion that doesn't get in the way of adding hundreds more lints in the future.

Even the above list of daunting challenges fails to do justice to the complexity of this project. Talyn created a witness generation prototype that lays the groundwork for robust checking of type-related SemVer breakages in the future. The success of this work is key to the cargo-semver-checks roadmap for 2026 and beyond. We would like to thank Talyn for their work, and we hope to continue working with them on improving witness generation in the future.

Extend behavioural testing of std::arch intrinsics

The std::arch module contains target-specific intrinsics (low-level functions that typically correspond to single machine instructions) which are intended to be used by other libraries. These are intended to match the equivalent intrinsics available as vendor-specific extensions in C.
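As a rough, hand-written illustration of the kind of runtime-behaviour check involved (the real intrinsics-test crate generates these automatically; this sketch covers a single SSE2 intrinsic on x86-64 only):

#[cfg(target_arch = "x86_64")]
fn main() {
    if is_x86_feature_detected!("sse2") {
        use std::arch::x86_64::*;
        unsafe {
            let a = _mm_set_epi32(1, 2, 3, 4);
            let b = _mm_set_epi32(10, 20, 30, 40);
            let sum = _mm_add_epi32(a, b); // should compile to a single paddd
            // Lane 0 holds 4 + 40 = 44 (_mm_set_epi32 takes arguments high-to-low).
            assert_eq!(_mm_cvtsi128_si32(sum), 44);
        }
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}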

The intrinsics are tested with three approaches. We test that:

  • The signatures of the intrinsics match the one specified by the architecture.
  • The intrinsics generate the correct instruction.
  • The intrinsics have the correct runtime behavior.

These behavior tests are implemented in the intrinsics-test crate. Initially, this test framework only covered the AArch64 and AArch32 targets, where it was very useful in finding bugs in the implementation of the intrinsics. Madhav's project was about refactoring and improving this framework to make it easier (or really, possible) to extend it to other CPU architectures.

First, Madhav split the codebase into a module with shared (architecturally independent) code and a module with ARM-specific logic. Then he implemented support for testing intrinsics for the x86 architecture, which is Rust's most widely used target. In doing so, he allowed us to discover real bugs in the implementation of some intrinsics, which is a great result! Madhav also did a lot of work in optimizing how the test suite is compiled and executed, to reduce CI time needed to run tests, and he laid the groundwork for supporting even more architectures, specifically LoongArch and WebAssembly.

We would like to thank Madhav for all his work on helping us make sure that Rust intrinsics are safe and correct!

Implement merge functionality in bors

The main Rust repository uses a pull request merge queue bot that we call bors . Its current Python implementation has a lot of issues and was difficult to maintain. The goal of this GSoC project was thus to implement the primary merge queue functionality in our Rust rewrite of this bot.

Sakibul first examined the original Python codebase to figure out what it was doing, and then he implemented several bot commands that allow contributors to approve PRs, set their priority, delegate approval rights, temporarily close the merge tree, and many others. He also implemented an asynchronous background process that checks whether a given pull request is mergeable or not (this process is relatively involved, due to how GitHub works), which required implementing a specialized synchronized queue for deduplicating mergeability check requests to avoid overloading the GitHub API. Furthermore, Sakibul also reimplemented (a nicer version of) the merge queue status webpage that can be used to track which pull requests are currently being tested on CI, which ones are approved, etc.

After the groundwork was prepared, Sakibul could work on the merge queue itself, which required him to think about many tricky race conditions and edge cases to ensure that bors doesn't e.g. merge the wrong PR into the default branch or merge a PR multiple times. He covered these edge cases with many integration tests, to give us more confidence that the merge queue will work as we expect it to, and also prepared a script for creating simulated PRs on a test GitHub repository so that we can test bors "in the wild". And so far, it seems to be working very well!

After we finish the final piece of the merge logic (creating so-called "rollups" ) together with Sakibul, we will start using bors fully in the main Rust repository. Sakibul's work will thus be used to merge all rust-lang/rust pull requests. Exciting!

Apart from working on the merge queue, Sakibul made many other awesome contributions to the codebase, like refactoring the test suite or analyzing performance of SQL queries. In total, Sakibul sent around fifty pull requests that were already merged into bors! What can we say, other than: Awesome work Sakibul, thank you!

Improve bootstrap

bootstrap is the build system of Rust itself, which is responsible for building the compiler, standard library, and pretty much everything else that you can download through rustup . This project's goal was very open-ended: "improve bootstrap".

And Shourya did just that! He made meaningful contributions to several parts of bootstrap. First, he added much-needed documentation to several core bootstrap data structures and modules, which were quite opaque and hard to understand without any docs. Then he moved to improving command execution, as each bootstrap invocation invokes hundreds of external binaries, and it was difficult to track them. Shourya finished a long-standing refactoring that routes almost all executed commands through a single place. This allowed him to also implement command caching and also command profiling, which shows us which commands are the slowest.

After that, Shourya moved on to refactoring config parsing. This was no easy task, because bootstrap has A LOT of config options; the single function that parses them had over a thousand lines of code (!). A set of complicated config precedence rules was frequently causing bugs when we had to modify that function. It took him several weeks to untangle this mess, but the result is worth it. The refactored function is much less brittle and easier to understand and modify, which is great for future maintenance.

The final area that Shourya improved were bootstrap tests. He made it possible to run them using bare cargo , which enables debugging them e.g. in an IDE, which is very useful, and mainly he found a way to run the tests in parallel, which makes contributing to bootstrap itself much more pleasant, as it reduced the time to execute the tests from a minute to under ten seconds. These changes required refactoring many bootstrap tests that were using global state, which was not compatible with parallel execution.

Overall, Shourya made more than 30 PRs to bootstrap since April! We are very thankful for all his contributions, as they made bootstrap much easier to maintain. Thank you!

Improve Wild linker test suites

Wild is a very fast linker for Linux that’s written in Rust. It can be used to build executables and shared objects.

Kei’s project was to leverage the test suite of one of the other Linux linkers to help test the Wild linker. This goal was accomplished. Thanks to Kei’s efforts, we now run the Mold test suite against Wild in our CI. This has helped to prevent regressions on at least a couple of occasions and has also helped to show places where Wild has room for improvement.

In addition to this core work, Kei also undertook numerous other changes to Wild during GSoC. Of particular note was the reworking of argument parsing to support --help , which we had wanted for some time. Kei also fixed a number of bugs and implemented various previously missing features. This work has helped to expand the range of projects that can use Wild to build executables.

Kei has continued to contribute to Wild even after the GSoC project finished and has now contributed over seventy PRs . We thank Kei for all the hard work and look forward to continued collaboration in the future!

Improving the Rustc Parallel Frontend: Parallel Macro Expansion

The Rust compiler has a (currently unstable) parallel compilation mode in which some compiler passes run in parallel. One major part of the compiler that is not yet affected by parallelization is name resolution. It has several components, but those selected for this GSoC project were import resolution and macro expansion (which are in fact intermingled into a single fixed-point algorithm). Besides the parallelization itself, another important point of the work was improving the correctness of import resolution by eliminating accidental order dependencies in it, as those also prevent parallelization.

We should note that this was a very ambitious project, and we knew from the beginning that it would likely be quite challenging to reach the end goal within the span of just a few months. And indeed, Lorrens did in fact run into several unexpected issues that showed us that the complexity of this work is well beyond a single GSoC project, so he didn't actually get to parallelizing the macro expansion algorithm. Nevertheless, he did a lot of important work to improve the name resolver and prepare it for being parallelized.

The first thing that Lorrens had to do was actually understand how Rust name resolution works and how it is implemented in the compiler. That is, to put it mildly, a very complex piece of logic, and is affected by legacy burden in the form of backward compatibility lints, outdated naming conventions, and other technical debt. Even this learned knowledge itself is incredibly useful, as the set of people that understand Rust's name resolution today is very low, so it is important to grow it.

Using this knowledge, he made a lot of refactorings to separate significant mutability in name resolver data structures from "cache-like" mutability used for things like lazily loading otherwise immutable data from extern crates, which was needed to unblock parallelization work. He split various parts of the name resolver, got rid of unnecessary mutability and performed a bunch of other refactorings . He also had to come up with a very tricky data structure that allows providing conditional mutable access to some data.

These refactorings allowed him to implement something called "batched import resolution" , which splits unresolved imports in the crate into "batches", where all imports in a single batch can be resolved independently and potentially in parallel, which is crucial for parallelizing name resolution. We have to resolve a few remaining language compatibility issues , after which the batched import resolution work will hopefully be merged.

Lorrens laid an important groundwork for fixing potential correctness issues around name resolution and macro expansion, which unblocks further work on parallelizing these compiler passes, which is exciting. His work also helped unblock some library improvements that were stuck for a long time. We are grateful for your hard work on improving tricky parts of Rust and its compiler, Lorrens. Thank you!

Make cargo-semver-checks faster

cargo-semver-checks is a Cargo subcommand for finding SemVer API breakages in Rust crates. It is adding SemVer lints at an exponential pace: the number of lints has been doubling every year, and currently stands at 229 . More lints mean more work for cargo-semver-checks to do, as well as more work for its test suite which runs over 250000 lint checks!

Joseph's contributions took three forms:

  • Improving cargo-semver-checks runtime performance—on large crates, our query runtime went from ~8s to ~2s, a 4x improvement!
  • Improving the test suite's performance, enabling us to iterate faster. Our test suite used to take ~7min and now finishes in ~1min, a 7x improvement!
  • Improving our ability to profile query performance and inspect performance anomalies, both of which were proving a bottleneck for our ability to ship further improvements.

Joseph described all the clever optimization tricks leading to these results in his final report . To encourage you to check out the post, we'll highlight a particularly elegant optimization described there.

cargo-semver-checks relies on rustdoc JSON, an unstable component of Rust whose output format often has breaking changes. Since each release of cargo-semver-checks supports a range of Rust versions, it must also support a range of rustdoc JSON formats. Fortunately, each file carries a version number that tells us which version's serde types to use to deserialize the data.

Previously, we used to deserialize the JSON file twice: once with a serde type that only loaded the format_version: u32 field, and a second time with the appropriate serde type that matches the format. This works fine, but many large crates generate rustdoc JSON files that are 500 MiB+ in size, requiring us to walk all that data twice. While serde is quite fast, there's nothing as fast as not doing the work twice in the first place!

So we used a trick: optimistically check if the format_version field is the last field in the JSON file, which happens to be the case every time (even though it is not guaranteed). Rather than parsing JSON, we merely look for a , character in the last few dozen bytes, then look for : after the , character, and for format_version between them. If this is successful, we've discovered the version number while avoiding going through hundreds of MB of data! If we failed for any reason, we just fall back to the original approach having only wasted the effort of looking at 20ish extra bytes.
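A rough sketch of that fast path (written here for illustration; the helper name is made up and the real cargo-semver-checks code differs in detail):

// Try to sniff the version number from the tail of a rustdoc JSON document,
// falling back to a full parse (None) if anything looks unexpected.
fn sniff_format_version(json: &[u8]) -> Option<u32> {
    let tail_start = json.len().saturating_sub(64);
    let tail = std::str::from_utf8(&json[tail_start..]).ok()?;

    let comma = tail.rfind(',')?;        // last `,` in the final few dozen bytes
    let rest = &tail[comma + 1..];
    let colon = rest.find(':')?;         // first `:` after that comma
    if !rest[..colon].contains("format_version") {
        return None;                     // fast path failed: do the full parse
    }
    // Whatever sits between the `:` and the closing `}` should be the number.
    rest[colon + 1..]
        .trim_end_matches(|c: char| c == '}' || c.is_whitespace())
        .trim()
        .parse()
        .ok()
}

fn main() {
    let doc = br#"{"root":"0:0","format_version": 43}"#;
    assert_eq!(sniff_format_version(doc), Some(43));
}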

Joseph did a lot of profiling and performance optimizations to make cargo-semver-checks faster for everyone, with awesome results. Thank you very much for your work!

Make Rustup Concurrent

As a very important part of the Rustup team's vision of migrating the rustup codebase to using async IO since the introduction of the global tokio runtime in #3367 , this project's goal was to introduce proper concurrency to rustup. Francisco did that by attacking two aspects of the codebase at once:

  1. He created a new set of user interfaces for displaying concurrent progress.
  2. He implemented a new toolchain update checking & installation flow that is idiomatically concurrent.

As a warmup, Francisco made rustup check concurrent, resulting in a rather easy 3x performance boost in certain cases. Along the way, he also introduced a new indicatif-based progress bar for reporting progress of concurrent operations, which replaced the original hand-rolled solution.

After that, the focus of the project has moved on to the toolchain installation flow used in commands like rustup toolchain install and rustup update . In this part, Francisco developed two main improvements:

  1. The possibility of downloading multiple components at once when setting up a toolchain, controlled by the RUSTUP_CONCURRENT_DOWNLOADS environment variable. Setting this variable to a value greater than 1 is particularly useful in certain internet environments where the speed of a single download connection could be restricted by QoS (Quality of Service) limits.
  2. The ability to interleave component network downloads and disk unpacking. For the moment, unpacking will still happen sequentially, but disk and net I/O can finally be overlapped! This introduces a net gain in toolchain installation time, as only the last component being downloaded will have noticeable unpacking delays. In our tests, this typically results in a reduction of 4-6 seconds (on fast connections, that's ~33% faster!) when setting up a toolchain with the default profile.

We have to say that these results are very impressive! While a few seconds shorter toolchain installation might not look so important at a first glance, rustup is ubiquitously used to install Rust toolchains on CI of tens of thousands of Rust projects, so this improvement (and also further improvements that it unlocks) will have an enormous effect across the Rust ecosystem. Many thanks to Francisco Gouveia's enthusiasm and active participation, without which this wouldn't have worked out!

Mapping the Maze of Rust's UI Test Suite with Established Continuous Integration Practices

The snapshot-based UI test suite is a crucial part of the Rust compiler's test suite. It contains a lot of tests: over 19000 at the time of writing. The organization of this test suite is thus very important, for at least two reasons:

  1. We want to be able to find specific tests, identify related tests, and have some sort of logical grouping of related tests.
  2. We have to ensure that no directory contains so many entries that GitHub gives up rendering the directory.

Furthermore, having informative test names and having some context for each test is particularly important, as otherwise contributors would have to reverse-engineer test intent from git blame and friends.

Over the years, we have accumulated a lot of unorganized stray test files in the top level tests/ui directory, and have a lot of generically named issue-*.rs tests in the tests/ui/issues/ directory. The former makes it annoying to find more meaningful subdirectories, while the latter makes it completely non-obvious what each test is about.

Julien's project was about introducing some order into the chaos. And that was indeed achieved! Through Julien's efforts (in conjunction with efforts from other contributors), we now have:

  • No more stray tests under the immediate tests/ui/ top-level directory; they are now organized into more meaningful subdirectories. We were then able to introduce a style check to prevent new stray tests from being added.
  • A top-level document containing TL;DRs for each of the immediate subdirectories.
  • Substantially fewer generically-named issue-*.rs tests under tests/ui/issues/.

Test organization (and more generally, test suite ergonomics) is an often under-appreciated aspect of maintaining complex codebases. Julien spent a lot of effort improving test ergonomics of the Rust compiler, both in last year's GSoC (where he vastly improved our "run-make" test suite), and then again this year, where he made our UI test suite more ergonomic. We really appreciate your meticulous work, Julien! Thank you very much.

Modernising the libc Crate

libc is a crucial crate in the Rust ecosystem (on average, it has ~1.5 million daily downloads), providing bindings to system C API. This GSoC project had two goals: improve testing for what we currently have, and make progress toward a stable 1.0 release of libc .

Test generation is handled by the ctest crate, which creates unit tests that compare properties of Rust API to properties of the C interfaces it binds. Prior to the project, ctest used an obsolete Rust parser that had stopped receiving major updates about eight years ago, meaning libc could not easily use any syntax newer than that. Abdul completely rewrote ctest to use syn as its parser and make it much easier to add new tests, then went through and switched everything over to the more modern ctest . After this change, we were able to remove a number of hacks that had been needed to work with the old parser.

The other part of the project was to make progress toward the 1.0 release of libc . Abdul helped with this by going through and addressing a number of issues that need to be resolved before the release, many of which were made possible with all the ctest changes.

While there is still a lot of work left to do before libc can reach 1.0, Abdul's improvements will go a long way towards making that work easier, as they give us more confidence in the test suite, which is now much easier to modify and extend. Thank you very much for all your work!

Prepare stable_mir crate for publishing

This project's goal was to prepare the Rust compiler's stable_mir crate (eventually renamed to rustc_public ), which provides a way to interface with the Rust compiler for analyzing Rust code, for publication on crates.io. While the existing crate provided easier APIs for tool developers, it lacked proper versioning and was tightly coupled with compiler versions. The goal was to enable independent publication with semantic versioning.

The main technical work involved restructuring rustc_public and rustc_public_bridge (previously named rustc_smir ) by inverting their dependency relationship. Makai resolved circular dependencies by temporarily merging the crates and gradually separating them with the new architecture. They also split the existing compiler interface to separate public APIs from internal compiler details.

Furthermore, Makai established infrastructure for dual maintenance: keeping an internal version in the Rust repository to track compiler changes while developing the publishable version in a dedicated repository. Makai automated a system to coordinate between versions, and developed custom tooling to validate compiler version compatibility and to run tests.

Makai successfully completed the core refactoring and infrastructure setup, making it possible to publish rustc_public independently with proper versioning support for the Rust tooling ecosystem! As a bonus, Makai contributed several bug fixes and implemented new APIs that had been requested by the community. Great job Makai!

Prototype an alternative architecture for cargo fix using cargo check

The cargo fix command applies fixes suggested by lints, which makes it useful for cleaning up sloppy code, reducing the annoyance of toolchain upgrades when lints change, and helping with edition migrations and new lint adoption. However, it has a number of issues. It can be slow, it only applies a subset of possible lints, and it doesn't provide an easy way to select which lints to fix.

These problems are caused by its current architecture; it is implemented as a variant of cargo check that replaces rustc with cargo being run in a special mode that will call rustc in a loop, applying fixes until there are none. While this special rustc -proxy mode is running, a cross-process lock is held to force only one build target to be fixed at a time to avoid race conditions. This ensures correctness at the cost of performance and difficulty in making the rustc -proxy interactive.

Glen implemented a proof of concept of an alternative design called cargo-fixit . cargo fixit spawns cargo check in a loop, determining which build targets are safe to fix in a given pass, and then applying the suggestions. This puts the top-level program in charge of what fixes get applied, making it easier to coordinate. It also allows the locking to be removed and opens the door to an interactive mode.

Glen performed various benchmarks to test how the new approach performs. And in some benchmarks, cargo fixit was able to finish within a few hundred milliseconds, where before the same task took cargo fix almost a minute! As always, there are trade-offs; the new approach comes at the cost that fixes in packages lower in the dependency tree can cause later packages to be rebuilt multiple times, slowing things down, so there were also benchmarks where the old design was a bit faster. The initial results are still very promising and impressive!

Further work remains to be done on cargo-fixit to investigate how it could be further optimized and what its interface should look like before stabilization. We thank Glen for all the hard work on this project, and we hope that one day the new design will be used by default in Cargo, bringing faster and more flexible fixing of lint suggestions to everyone!

Prototype Cargo Plumbing Commands

The goal of this project was to move forward our Project Goal for creating low-level ("plumbing") Cargo subcommands to make it easier to reuse parts of Cargo by other tools.

Vito created a prototype of several plumbing commands in the cargo-plumbing crate. The idea was to better understand how the plumbing commands should look like, and what is needed from Cargo to implement them. Vito had to make compromises in some of these commands to not be blocked on making changes to the current Cargo Rust APIs, and he helpfully documented those blockers . For example, instead of solely relying on the manifests that the user passed in, the plumbing commands will re-read the manifests within each command, preventing callers from being able to edit them to get specific behavior out of Cargo, e.g. dropping all workspace members to allow resolving dependencies on a per-package basis.

Vito did a lot of work, as he implemented seven different plumbing subcommands:

  • locate-manifest
  • read-manifest
  • read-lockfile
  • lock-dependencies
  • write-lockfile
  • resolve-features
  • plan-build

As future work, we would like to deal with some unresolved questions around how to integrate these plumbing commands within Cargo itself, and extend the set of plumbing commands.

We thank Vito for all his work on improving the flexibility of Cargo.

Conclusion

We would like to thank all contributors that have participated in Google Summer of Code 2025 with us! It was a blast, and we cannot wait to see which projects GSoC contributors will come up with in the next year. We would also like to thank Google for organizing the Google Summer of Code program and for allowing us to have so many projects this year. And last, but not least, we would like to thank all the Rust mentors who were tirelessly helping our contributors to complete their projects. Without you, Rust GSoC would not be possible.

  1. You can read about what those individual compiler stages mean, e.g., here.

Show HN: Optimizing LiteLLM with Rust – When Expectations Meet Reality

Hacker News
github.com
2025-11-18 16:32:16
Comments...
Original Article

Fast LiteLLM

PyPI License: MIT Python Versions

High-performance Rust acceleration for LiteLLM - providing 2-20x performance improvements for token counting, routing, rate limiting, and connection management.

Why Fast LiteLLM?

Fast LiteLLM is a drop-in Rust acceleration layer for LiteLLM that provides significant performance improvements:

  • 5-20x faster token counting with batch processing
  • 3-8x faster request routing with lock-free data structures
  • 4-12x faster rate limiting with async support
  • 2-5x faster connection management

Built with PyO3 and Rust, it seamlessly integrates with existing LiteLLM code with zero configuration required.

Installation

Quick Start

import fast_litellm  # Automatically accelerates LiteLLM
import litellm

# All LiteLLM operations now use Rust acceleration where available
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}]
)

That's it! Just import fast_litellm before litellm and acceleration is automatically applied.

Architecture

The acceleration uses PyO3 to create Python extensions from Rust code:

┌─────────────────────────────────────────────────────────────┐
│ LiteLLM Python Package                                      │
├─────────────────────────────────────────────────────────────┤
│ fast_litellm (Python Integration Layer)                    │
│ ├── Enhanced Monkeypatching                                │
│ ├── Feature Flags & Gradual Rollout                        │
│ ├── Performance Monitoring                                 │
│ └── Automatic Fallback                                     │
├─────────────────────────────────────────────────────────────┤
│ Rust Acceleration Components (PyO3)                        │
│ ├── core               (Advanced Routing)                   │
│ ├── tokens             (Token Counting)                    │
│ ├── connection_pool    (Connection Management)             │
│ └── rate_limiter       (Rate Limiting)                     │
└─────────────────────────────────────────────────────────────┘

Features

  • Zero Configuration : Works automatically on import
  • Production Safe : Built-in feature flags, monitoring, and automatic fallback to Python
  • Performance Monitoring : Real-time metrics and optimization recommendations
  • Gradual Rollout : Support for canary deployments and percentage-based feature rollout
  • Thread Safe : Lock-free data structures using DashMap for concurrent operations
  • Type Safe : Full Python type hints and type stubs included

Performance Benchmarks

Component            Baseline speedup   Optimized speedup   Use Case
Token Counting       5-10x              15-20x              Batch processing, context management
Request Routing      3-5x               6-8x                Load balancing, model selection
Rate Limiting        4-8x               10-12x              Request throttling, quota management
Connection Pooling   2-3x               4-5x                HTTP reuse, latency reduction

Configuration

Fast LiteLLM works out of the box with zero configuration. For advanced use cases, you can configure behavior via environment variables:

# Disable specific features
export FAST_LITELLM_RUST_ROUTING=false

# Gradual rollout (10% of traffic)
export FAST_LITELLM_BATCH_TOKEN_COUNTING=canary:10

# Custom configuration file
export FAST_LITELLM_FEATURE_CONFIG=/path/to/config.json

See the Configuration Guide for all options.

Requirements

  • Python 3.8 or higher
  • LiteLLM

Rust is not required for installation - prebuilt wheels are available for all major platforms.

Development

To contribute or build from source:

Prerequisites:

  • Python 3.8+
  • Rust toolchain (1.70+)
  • maturin for building Python extensions

Setup:

git clone https://github.com/neul-labs/fast-litellm.git
cd fast-litellm

# Install maturin
pip install maturin

# Build and install in development mode
maturin develop

# Run unit tests
pip install pytest pytest-asyncio
pytest tests/

Integration Testing

Fast LiteLLM includes comprehensive integration tests that run LiteLLM's test suite with acceleration enabled:

# Setup LiteLLM for testing
./scripts/setup_litellm.sh

# Run LiteLLM tests with acceleration
./scripts/run_litellm_tests.sh

# Compare performance (with vs without acceleration)
./scripts/compare_performance.py

This ensures Fast LiteLLM doesn't break any LiteLLM functionality. See the Testing Guide for details.

For more information, see our Contributing Guide .

Documentation

How It Works

Fast LiteLLM uses PyO3 to create Python extensions from Rust code, as shown in the Architecture diagram above.

When you import fast_litellm , it automatically patches LiteLLM's performance-critical functions with Rust implementations while maintaining full compatibility with the Python API.
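
The README doesn't include the patching code itself, but the pattern it describes (swap in an accelerated function, fall back to the original Python implementation on failure) looks roughly like the sketch below. The rust_token_count stub and the choice of litellm.token_counter as a patch target are illustrative assumptions, not the library's actual internals.

import functools
import litellm  # the library being accelerated

def patch_with_fallback(module, attr_name, fast_impl):
    """Replace module.attr_name with fast_impl, falling back to the original
    Python implementation if the accelerated version raises."""
    original = getattr(module, attr_name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        try:
            return fast_impl(*args, **kwargs)
        except Exception:
            # Any failure in the accelerated path falls back to pure Python.
            return original(*args, **kwargs)

    setattr(module, attr_name, wrapper)
    return original  # keep a handle so the patch can be undone

# Hypothetical usage: route litellm.token_counter through a Rust-backed
# implementation (rust_token_count stands in for the PyO3 extension function).
def rust_token_count(*args, **kwargs):
    raise NotImplementedError("stand-in for the compiled extension")

patch_with_fallback(litellm, "token_counter", rust_token_count)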

Contributing

We welcome contributions! Please see our Contributing Guide .

License

This project is licensed under the MIT License - see the LICENSE file for details.

Links

Solving a Million-Step LLM Task with Zero Errors

Hacker News
arxiv.org
2025-11-18 16:26:28
Comments...
Original Article

View PDF HTML (experimental)

Abstract: LLMs have achieved remarkable breakthroughs in reasoning, insights, and tool use, but chaining these abilities into extended processes at the scale of those routinely executed by humans, organizations, and societies has remained out of reach. The models have a persistent error rate that prevents scale-up: for instance, recent experiments in the Towers of Hanoi benchmark domain showed that the process inevitably becomes derailed after at most a few hundred steps. Thus, although LLM research is often still benchmarked on tasks with relatively few dependent logical steps, there is increasing attention on the ability (or inability) of LLMs to perform long range tasks. This paper describes MAKER, the first system that successfully solves a task with over one million LLM steps with zero errors, and, in principle, scales far beyond this level. The approach relies on an extreme decomposition of a task into subtasks, each of which can be tackled by focused microagents. The high level of modularity resulting from the decomposition allows error correction to be applied at each step through an efficient multi-agent voting scheme. This combination of extreme decomposition and error correction makes scaling possible. Thus, the results suggest that instead of relying on continual improvement of current LLMs, massively decomposed agentic processes (MDAPs) may provide a way to efficiently solve problems at the level of organizations and societies.
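
As a toy illustration of the per-step error correction described above, here is a minimal Python sketch of majority voting over several microagent samples at each step; the sampling function is a stand-in, and MAKER's actual decomposition and voting scheme are considerably more sophisticated than this.

import random
from collections import Counter

def sample_microagent(subtask: str) -> str:
    """Stand-in for one focused LLM call on a single, tiny subtask.
    It simulates an answer that is occasionally wrong."""
    correct = f"answer({subtask})"
    return correct if random.random() > 0.1 else "wrong"

def vote_on_step(subtask: str, k: int = 5) -> str:
    """Sample the microagent k times and keep the most common answer,
    driving the per-step error rate far below that of a single sample."""
    answers = [sample_microagent(subtask) for _ in range(k)]
    return Counter(answers).most_common(1)[0][0]

# Chain many tiny steps; each step's result is accepted only after voting,
# which is what lets a long chain of fallible steps finish without errors.
errors = 0
for step in range(1_000):
    if vote_on_step(f"step {step}") == "wrong":
        errors += 1
print(f"wrong steps after voting: {errors}")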

Submission history

From: Elliot Meyerson [ view email ]
[v1] Wed, 12 Nov 2025 06:27:55 UTC (7,906 KB)

A Day at Hetzner Online in the Falkenstein Data Center

Hacker News
www.igorslab.de
2025-11-18 16:18:47
Comments...
Original Article

My visit to Hetzner Online in Falkenstein goes far beyond a tour of a single server, as the site reveals an impressive combination of industrial precision, technical depth and logistically sophisticated infrastructure. As soon as you enter the extensive grounds, it becomes clear that this is a data center park that has grown over many years and is now one of the most comprehensive sites of its kind in Germany. The paths between the individual areas are long and winding, the buildings are spread over a large area and even the internal transport routes look more like the access roads of a small industrial area. Anyone planning a similar project should really be prepared to walk several kilometers and prefer sturdy footwear, as even the employees walk considerable distances every day in this environment. And because I know you’d rather watch someone else sweating in the heat, I’ve made a movie of this experience today:

And if you’d like to read about what you see, here’s a brief overview of today’s action:

The highly structured layout of the site is already apparent at the main entrance. After registering, the tour leads directly past the colocation area, where customers can operate their own hardware and access a fully-fledged data center environment. The modular arrangement of the racks, the clearly separated supply routes and the functional design convey an image of precise organization that was to run through the entire visit.

From there, we moved on to the midi data centers, which are designed as compact modules and yet meet all the requirements of a modern data center. This part of the park clearly demonstrates Hetzner’s focus on practicality, as there is nothing superfluous here. Everything is designed for smooth processes, short maintenance times and high energy efficiency. The simplicity on the outside is deceptive, because the infrastructure inside meets the requirements of a large-scale IT operation in every respect.

The route then led me to the areas where the actual in-house production is visible. Hetzner produces a significant proportion of its servers itself and does not rely on exaggerated presentation, but on robust construction and standardized processes. Individual components are first tested, followed by the assembly of the systems with clearly defined work steps before the devices are transferred to extensive test tracks. The test benches for hard disks and SSDs run continuously to ensure both reliability and long-term stability. In addition, there are areas for hardware service, RMA processing and the complete testing of 19-inch systems, so that defective components or components requiring replacement can be processed directly on site.

Another station was the production and processing of racks and open frame systems, which are used for cloud services, object storage and internal production environments. This segment shows particularly clearly that Hetzner deliberately relies on functional design and regional suppliers. The racks are finalized, checked and prepared for continuous use in data center environments in-house. Airflow, stability and ease of assembly are the main focus, as every second and every move counts in mass deployment.

Finally, in the decommissioning area, it becomes clear how carefully the entire life cycle of the hardware is handled. Discarded devices go through a complete documentation process before they are destroyed in a specially certified shredder, which renders both magnetic hard drives and SSDs safe and permanently unusable. The remnants then enter the regulated disposal process, which is also documented in detail.

The technical foundation of the site is particularly noteworthy. The redundant energy supply, the sophisticated network connection and the consistent use of free cooling form a system that enables maximum operating comfort with the lowest possible energy consumption. Thanks to the climatic location in the Vogtland region, a large part of the cooling can be achieved using outside air alone, resulting in low PUE values and a very stable thermal environment. The technical impression is so impressive that you almost forget how much work is behind the realization and ongoing operation.

Visitors quickly realize that a tour of Falkenstein is by no means a short detour. The dimensions, the number of halls and the distribution of the individual production and service areas inevitably mean that several kilometers have to be covered on foot. Good walking shoes are therefore not a bad recommendation. Anyone interested in infrastructure, technology and data center processes will gain an in-depth insight that goes far beyond what you would expect from the outside. Falkenstein is not an anonymous data location, but a highly organized interplay of production, service, operation and security that is rarely seen in this form. If you can get in…

So much for the script, but now just watch the finished cinematic product, because videos don’t hurt and I’ve kept it really short!

And if anyone is still looking for a good deal: Click here! (No affiliate, just a small thank you to the travel guides)

The code and open-source tools I used to produce a science fiction anthology

Hacker News
compellingsciencefiction.com
2025-11-18 16:10:34
Comments...
Original Article

Last month I published Think Weirder: The Year's Best Science Fiction Ideas , a 16-story anthology featuring Greg Egan, Isabel J. Kim, Ray Nayler, Caroline M. Yoachim, and twelve other wonderful authors. The book ended up being the #1 New Release in the Short Stories Anthologies category for a short time on Amazon, outselling many other newly released short story anthologies published by the big NYC publishers with large marketing departments.

I'm not a professional publisher. I have a full-time job and two small kids, so all of this work happened after my kids went to sleep. I had to use my time judiciously, which meant creating an efficient process. Fortunately I'm a programmer, and it turns out that programming skills translate surprisingly well to book publishing. This post is about how I built a complete publishing pipeline using Python, YAML files, and LaTeX — and why you might want to do something similar if you're considering publishing a book. I know that by writing this I'll have my choices questioned by professional designers, but hopefully the software concepts will be helpful.

My initial thought: can I really do ALL of this?

When I started this project, I had some worries. Professional publishers have entire departments of specialists. How could I possibly handle all of that myself?

The answer turned out to be: build tools that automate the repetitive parts, and use simple file formats that make everything transparent and debuggable.

Step 1: Tracking stories with plain text files

The first challenge was tracking hundreds of candidate stories from different magazines. I read 391 stories published in 2024 before selecting the final 16. That's a lot of stories to keep organized.

I could have used a spreadsheet, but I went with plain YAML files instead. Here's why this worked well for me:

  • Git-friendly : Every decision I made was tracked in version control
  • Human-readable : I could open any file in a text editor and understand what I was looking at
  • Easy to build scripts around: I wrote several Python functions to do different kinds of metadata introspection that I'll go through

The structure looks like this:

data/
  story-progress.yaml       # Central tracking file
  markets.yaml              # Magazine metadata
  themes.yaml               # Theme occurrence tracking
  subgenres.yaml            # Subgenre tallies
stories/
  clarkesworld-magazine/
    nelson_11_24.yaml       # Individual story files
    pak_06_24.yaml
  reactor-magazine/
    larson_breathing.yaml
  ...

Each story file is pure YAML containing the full story text plus metadata:

title: "Twenty-Four Hours"
author: H.H. Pak
market: clarkesworld-magazine
url: https://clarkesworldmagazine.com/pak_06_24/
word_count: 4540
year: 2024
slug: pak_06_24
summary: ...

Not all stories have public URLs available, but that's OK because all of the fields are optional. The central story-progress.yaml tracks editorial state:

clarkesworld-magazine-nelson_11_24:
  title: "LuvHome™"
  author: Resa Nelson
  market: clarkesworld-magazine
  status: accepted  # or: not_started/relevant/rejected
  date_added: '2024-09-08T08:22:47.033192'

Step 2: A simple command-line tool

I built a small Python CLI tool ( se.py ) to help me navigate all this data. Since I do all this work at night after my kids go to sleep, I wanted something fast that mirrored a lot of the other work I do on the command line. The tool is simple:

python se.py --help
usage: se.py [-h] {markets,stories,relevant,decide,accepted,compile} ...

Story Evaluator CLI

positional arguments:
  {markets,stories,relevant,decide,accepted,compile}
                        Available commands
    markets             List markets
    stories             Manage stories
    relevant            List URLs for stories marked as relevant
    decide              Make accept/reject decisions on relevant stories
    accepted            Manage accepted stories
    compile             Show anthology compilation statistics

optional arguments:
  -h, --help            show this help message and exit

The compile command ended up being really useful — it gave me instant feedback on anthology size and composition:

ANTHOLOGY COMPILATION STATISTICS
============================================================
Total Stories: 16
Total Word Count: 115,093 words
Average Word Count: 7,193 words
Unique Authors: 16
Markets Represented: 4

STORIES BY MARKET:
  analog-magazine: 2 stories (12.5%)
  asimovs-magazine: 2 stories (12.5%)
  clarkesworld-magazine: 10 stories (62.5%)
  reactor-magazine: 2 stories (12.5%)

This was really helpful during the selection process. I could quickly check how far along I was toward my ~120k word goal, and make sure I hadn't accidentally included multiple stories by the same author.
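
The post doesn't show the implementation of the compile command, but computing these statistics from the YAML files is straightforward. Here is a minimal sketch that assumes story-progress.yaml and the per-story files follow the examples above, and that each tracking key is "<market>-<slug>" (PyYAML required); the real se.py may differ.

import yaml  # pip install pyyaml

# Load the central tracking file shown earlier.
with open("data/story-progress.yaml") as f:
    progress = yaml.safe_load(f)

total_words = 0
stories = 0
authors, markets = set(), set()

for key, entry in progress.items():
    if entry.get("status") != "accepted":
        continue
    stories += 1
    authors.add(entry["author"])
    markets.add(entry["market"])
    # Assumption: the key is "<market>-<slug>", so the story file lives at
    # stories/<market>/<slug>.yaml, matching the directory layout above.
    slug = key[len(entry["market"]) + 1:]
    with open(f"stories/{entry['market']}/{slug}.yaml") as sf:
        total_words += yaml.safe_load(sf).get("word_count", 0) or 0

print(f"Total Stories: {stories}")
print(f"Total Word Count: {total_words:,} words")
print(f"Unique Authors: {len(authors)}")
print(f"Markets Represented: {len(markets)}")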

Step 3: Typesetting the print book

This part surprised me the most. I initially thought I'd have to learn Adobe InDesign or pay someone to do the typesetting. But I decided to use LaTeX instead, since I had some previous experience with it (another publishing friend sent me some of his example files, and I had some academic experience). The process worked out better than expected.

I used XeLaTeX with the memoir document class. Here's what I liked about this approach:

  • Reproducible : I can rebuild the entire book from source in a few seconds, and I can use the same templates next year
  • Professional typography : LaTeX handles ligatures, kerning, and line breaking better than I could manually
  • Custom fonts : I used Crimson Pro for body text and Rajdhani for titles
  • Again, version control that I'm used to : The entire book is just text files in Git

The main parts of the master file for the book are really simple:

\documentclass[final,11pt,twoside]{memoir}
\usepackage{compelling}

\begin{document}
\begin{frontmatter}
  \include{title}
  \tableofcontents
\end{frontmatter}

\begin{mainmatter}
  \include{introduction}
  \include{death-and-the-gorgon}
  \include{the-best-version-of-yourself}
  % ... 14 more stories
  \include{acknowledgements}
\end{mainmatter}
\end{document}

All the formatting rules live in compelling.sty, a custom style package. Here's a link to the full, messy file. Some highlights:

% 6x9 inch trade paperback size
\setstocksize{9in}{6in}
\settrimmedsize{9in}{6in}{*}

% Margins
\setlrmarginsandblock{1.00in}{0.75in}{*}
\setulmarginsandblock{0.75in}{0.75in}{*}

% Typography nerding
\usepackage[final,protrusion=true,factor=1125,
            stretch=70,shrink=70]{microtype}

% Custom fonts loaded from local files
\setromanfont[
  Ligatures=TeX,
  Path=./Crimson_Pro/static/,
  UprightFont=CrimsonPro-Regular,
  BoldFont=CrimsonPro-Bold,
  ItalicFont=CrimsonPro-Italic,
  BoldItalicFont=CrimsonPro-BoldItalic
]{Crimson Pro}


\setsansfont[
  Path=./Rajdhani/,
  UprightFont=Rajdhani-Bold,
  BoldFont=Rajdhani-Bold,
  ItalicFont=Rajdhani-Bold,
  BoldItalicFont=Rajdhani-Bold
]{Rajdhani}

% Chinese font family for CJK characters
\newfontfamily\chinesefont{PingFang SC}

The microtype package does a lot of subtle work with character spacing and line breaking that makes the text look professionally typeset.

I wanted story titles in bold sans-serif with author names underneath in a lighter gray. Here's how I set that up:

\renewcommand{\chapter}[2]{
    \pagestyle{DefaultStyle}
    \stdchapter*{
        \sffamily
        \LARGE 
        \textbf{\MakeUppercase{#1}}
        \\ 
        \large 
        \color{dark-gray} 
        {\MakeUppercase{#2}}
    }
    \addcontentsline{toc}{chapter}{
        \protect\parbox[t]{\dimexpr\textwidth-3em}{
            \sffamily#1
            \\ 
            \protect\small
            \protect\color{gray}
            \protect\textit{#2}
        }
    }
    \def\leftmark{#1}
    \def\rightmark{#2}
}

This redefines the chapter command to take two arguments, the title and the byline, sets up both the chapter formatting and the TOC formatting, and makes sure that the title and byline are printed in the headers on alternating pages.

Now every story file just says:

\chapter{Death and the Gorgon}{by Greg Egan}
[story content]

Most authors send me stories as HTML, PDF, or Word, so I needed a way to convert them to LaTeX. I wrote a simple Python script to do this, which saved me a huge amount of manual formatting work.
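
My script isn't shown here, but a minimal version of the idea can lean on Pandoc (which is also used for the EPUB later): shell out to pandoc to convert each submission to a LaTeX fragment, then wrap it in the custom \chapter command. The helper and file layout below are illustrative assumptions, not the actual script.

import subprocess
from pathlib import Path

def convert_to_latex(source: Path, title: str, byline: str, out_dir: Path) -> Path:
    """Convert an HTML/DOCX submission into a LaTeX include file.

    Relies on pandoc being installed; `pandoc input -t latex` emits a
    body-only fragment suitable for \\include{} from the master document.
    """
    body = subprocess.run(
        ["pandoc", str(source), "-t", "latex"],
        capture_output=True, text=True, check=True,
    ).stdout
    out_path = out_dir / (source.stem + ".tex")
    out_path.write_text(
        f"\\chapter{{{title}}}{{{byline}}}\n\n{body}", encoding="utf-8"
    )
    return out_path

# Example (hypothetical paths):
# convert_to_latex(Path("submissions/gorgon.html"),
#                  "Death and the Gorgon", "by Greg Egan", Path("build/"))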

Step 4: Creating the ebook

Print was one thing, but I also needed an ebook. This turned out to be easier than I expected because I could reuse all the LaTeX source I'd already created.

I used Pandoc to convert from LaTeX to EPUB:

# Convert LaTeX to EPUB
pandoc 2025.tex -o Think_Weirder_2025.epub \
  --toc \
  --epub-cover-image=cover_optimized.jpg \
  --css=epub-style.css \
  --metadata title="Think Weirder" \
  --metadata author="Edited by Joe Stech"

Pandoc's default table of contents only showed story titles. But I wanted author names too, like you see in print anthologies. EPUBs are just zipped collections of XHTML files, so I wrote a small post-processing script:

def modify_toc(nav_content, authors):
    """Add author bylines to TOC entries."""
    pattern = r'<a href="([^"]+)">([^<]+)</a>'

    def add_author(match):
        href, title = match.group(1), match.group(2)
        chapter_id = extract_id_from_href(href)

        if chapter_id in authors:
            author = authors[chapter_id]
            return f'<a href="{href}">{title}<br />\n' \
                   f'<em>{author}</em></a>'
        return match.group(0)

    return re.sub(pattern, add_author, nav_content)

The script unzips the EPUB, finds the navigation file, adds author bylines, and rezips everything. Now the ebook table of contents matches the print version.
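
For completeness, that post-processing step can be sketched with Python's standard zipfile module, using the modify_toc function shown above. The nav-file name and the authors mapping are assumptions for illustration (Pandoc usually names the navigation document nav.xhtml, but the exact path inside the archive can vary).

import shutil
import zipfile
from pathlib import Path

def add_authors_to_toc(epub_path: str, authors: dict) -> None:
    """Unpack the EPUB (a zip archive), rewrite the navigation document
    with modify_toc(), and zip everything back up."""
    work_dir = Path("epub_work")
    if work_dir.exists():
        shutil.rmtree(work_dir)
    with zipfile.ZipFile(epub_path) as zf:
        zf.extractall(work_dir)

    # Find the navigation document and rewrite it in place.
    nav_file = next(work_dir.rglob("nav.xhtml"))
    nav_file.write_text(
        modify_toc(nav_file.read_text(encoding="utf-8"), authors),
        encoding="utf-8",
    )

    # Rebuild the archive. Note: a strictly conforming EPUB stores the
    # 'mimetype' entry first and uncompressed; that detail is omitted here.
    with zipfile.ZipFile(epub_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(work_dir.rglob("*")):
            if path.is_file():
                zf.write(path, path.relative_to(work_dir))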

What I learned

The whole process took longer than I expected — many months of night work. The simple software I wrote really made it a feasible one-person project though, and motivates me to go through the whole process again next year.

Staying organized is crucial. When hundreds of stories are involved, it's easy to forget details, so using se.py to save metadata in the moment that could be sliced and diced later was so important.

Reproducible builds were a lifesaver. I made changes to the book layout right up until the week before publication. Because I could rebuild the entire book in seconds, and everything was backed up in git, I could experiment freely without worrying about breaking things.

Simple file formats made me comfortable. When something went wrong, I could always open a YAML file or look at the LaTeX source and understand what was happening. I never hit a point where the tools were a black box.

I didn't need to understand everything up front. I learned LaTeX details as I went (arguably I still don't really understand LaTeX). Same with Pandoc. I got something basic working first, then incrementally improved it.

Can you do this too?

If you're thinking about publishing a book — whether it's an anthology, a novel, or a collection of technical writing — I think this approach is worth considering. There's something motivating about having a detailed understanding of every step in the production process. If you have questions feel free to reach out, I love talking about this hobby! You can email me at joe@thinkweirder.com.

And if you enjoy concept-driven science fiction that is heavy on novel ideas, check out Think Weirder!

5 Things to Try with Gemini 3 Pro in Gemini CLI

Hacker News
developers.googleblog.com
2025-11-18 16:09:31
Comments...
Original Article

Gemini 3 Pro is now available in Gemini CLI

We've integrated Gemini 3 Pro , our most intelligent model, directly into Gemini CLI to unlock a new level of performance and productivity in the terminal. This powerful combination delivers state-of-the-art reasoning for executing better commands, enhances support for complex engineering work through agentic coding, and enables smarter, more tailored workflows via advanced tool use.

We are rolling out access gradually to ensure the experience remains fast and reliable.

  • Gemini 3 Pro is available starting now in Gemini CLI for Google AI Ultra subscribers and for those who have access via paid Gemini API key.
  • For Gemini Code Assist Enterprise users, access is coming soon.
  • All other users, including Google AI Pro, Gemini Code Assist Standard, and free tier users, can join the waitlist here to get access as it becomes available.

You can also track our rollout progress by following this GitHub discussion .

Start using Gemini 3 Pro with Gemini CLI

If you’re a Google AI Ultra subscriber or have a paid Gemini API key, get started immediately by upgrading your Gemini CLI version to 0.16.x :

npm install -g @google/gemini-cli@latest

After you’ve confirmed your version, run /settings , then toggle Preview features to true . Gemini CLI will now default to Gemini 3 Pro.

Users can follow the instructions in this video to learn how to enable Gemini 3 Pro in Gemini CLI.

Here are 5 practical ways you can tap into Gemini 3 Pro in Gemini CLI to accelerate development and bring your biggest ideas to life.

Build anything in the terminal with improved agentic coding

Gemini 3 Pro excels at coding because of its ability to synthesize disparate pieces of information, including text, images, and code, and follow complex, creative instructions. It understands the intent behind your idea, allowing you to go from a rough concept to a functional starting point in a single step.

1. Generate a ready-to-deploy app with 3D graphics

Gemini 3 Pro's agentic coding capability allows it to handle prompts that are both a creative brief and a technical spec at the same time. It can take a prompt, create a detailed execution plan, and then generate the entire scaffold for a runnable web project, not just a single file.

For example, say you have an idea for a visually impressive prototype—a 3D graphic for a landing page or a quick tech demo. Instead of spending hours setting up a graphics library and a local dev server, you can describe the entire project in one go and get a working starting point immediately.

Objective: Build a visually stunning, photorealistic 3D Voxel simulation of the Golden Gate Bridge using Three.js, prioritizing quality and complex visuals (no simple blocks),  atmospheric depth and 60FPS performance.

Visuals & Atmosphere:
- Lighting: Slider (0-24h) controlling sun position, light intensity, sky color, and fog color.
- Fog: Volumetric-style fog using sprite particles that drift and bob. Slider 0-100. 0 = True Zero (Crystal Clear). 100 = Dense but realistic (not whiteout).
- Water: Custom GLSL shader with waves, specular reflections, and manual distance-based fog blending (exp2) for seamless horizon integration.
- Post-Processing: ACESFilmic Tone Mapping and UnrealBloom (optimized for glowing lights at night).

Scene Details:
- Bridge: Art Deco towers with concrete piers (anchored to seabed), main span catenary cables, and suspenders.
- Terrain: Low-poly Marin Headlands and SF Peninsula.
- Skyline: Procedural city blocks on the SF side.
- Traffic: Up to 400 cars using `InstancedMesh`, positioned accurately on top of the deck (ensure vertical alignment prevents clipping into the concrete). Each car features emissive headlights (white) and taillights (red).
- Ships: Procedural cargo ships with hull, containers, and functional navigation lights (Port/Starboard/Mast/Cabin) moving along the water.
- Nature: Animated flocking birds.
- Night Mode: At night, activate city lights, car headlights, ship navigation lights, tower beacons, street lights.

Tech & Controls:
- Core: Must output only single HTML file `golden_gate_bridge.html` to be run in a blank Chrome tab. Import Three.js/Addons via CDN map.
  -   `three` (Core library) via CDN (ES Modules).
  -   `three/examples/jsm/...` modules via Import Map.
  -   No build step (Vite/Webpack). Pure HTML/JS.

- UI: Visually appealing sliders for Time (0-24h), Fog Density (0-100%), Traffic Density (0-100%), and Camera Zoom.
- Optimization: `InstancedMesh` for all repetitive elements (cars, lights, birds).

2. Turn a visual idea into a working app

You've sketched a UI and need to translate that visual concept into functional code. You can take a picture of your sketch, and then simply drag and drop the image file into your terminal.

Gemini 3 Pro's multimodal understanding will analyze the drawing, identifying buttons, text boxes, and layout. It then generates the HTML, CSS, and JavaScript code to bring your sketch to life.

Create a UI for "Project Constellation," an internal brand intelligence tool prototype that shows a customer acquisition pipeline. The aesthetic is an ultra-creative, futuristic dark-mode nebula. Luminous, iridescent threads representing customer journeys weave through semi-transparent glass pillars. A sleek, floating data card with Tailwind CSS precision materializes when hovering over a pillar. I've prepared a sketch for you to work from: @sketch.png.

Improve your daily work

While vibe coding demos show off the art of the possible, the true test of a developer tool is how it performs the practical, everyday work you do multiple times a day. Small improvements in these common workflows, like refactoring code, debugging errors, or managing infrastructure, are what create real productivity gains.

This is where Gemini 3 Pro's state-of-the-art reasoning makes a tangible difference. It follows the nuances of complex, multi-part commands with greater precision than ever before, which is essential for the practical, detailed work that defines your day.

Here are a few examples of how Gemini 3 Pro can handle these critical engineering tasks.

3. Generate complex shell commands with natural language

With Gemini CLI, the power of the UNIX command line is available directly through natural language. There's no need to memorize the obscure syntax and every flag of UNIX commands; simply have Gemini 3 Pro translate your intent and execute it for you. Gemini can then even parse dense formatted output back into natural language for you.

Ask Gemini CLI to handle all the complexity of running git bisect for you on the command line, leaving you free to focus on applying your judgment to finding the bug in question.

At some point I lost the commit that set my default theme to dark. 
Find it for me with git bisect and return the hash to me.

4. Generate accurate documentation from your code

Gemini 3 Pro's advanced reasoning allows it to read and understand the logic of your codebase. It doesn't just see syntax; it can investigate and synthesize the purpose of a function, identify its parameters and return values, and translate that complex logic into clear, human-readable language.

This is useful when you’ve introduced a complex application and now need to create the documentation. Instead of manually writing out descriptions, you can have Gemini analyze the code and generate the docs for you in a format that is consistent with your code.

"This is an application that does not have any documentation and we do not have a technical writer. Before you begin, review all of the code. Then make me a user documentation. This document should only explain user facing features, but make sure to explain every single feature such as usage of the app, command line options, authentication options, built in tools, and all other user facing features. For certain features such as MCP or extensions, also explain the topic and concept so that the user has a better understanding. Since this is an open source project, provide an architectural overview of how the code is laid out, a summary of each component, and how they can contribute to the open-source project. The document should be organized and formatted so that it is easy to read and find. Do not make it a single html page. Make sure to add a search feature."

5. Debug performance issue in a live Cloud Run service

Gemini 3 Pro can orchestrate complex workflows across different services that hold your team's context. The improved tool use means it can plan and execute multi-step tasks that require gathering information from several sources—like observability, security, and source control—to solve a single problem.

In this example it connects a serverless platform (Cloud Run) with a popular security scanner (Snyk) using Gemini CLI extensions to find the root cause and suggest a fix, then deploys the fix, turning a complex, multi-tool investigation into a single, streamlined action.

Users are reporting that the "Save Changes" button is slow, investigate the 'tech-stack' service

Learn more today

These examples are just the start. The real potential isn't in running these specific commands, but in how Gemini 3 Pro can adapt to your unique challenges whether you're optimizing daily shell commands, tackling substantial engineering work, or building a workflow personalized to your team's tools. Gemini 3 Pro transforms the command line into an intelligent partner that understands your context.

The best way to see the difference is to try it yourself. Visit the Gemini CLI website , and share your own examples on social with #GeminiCLI. We can't wait to see what you build.

Google Brings Gemini 3 AI Model to Search and AI Mode

Hacker News
blog.google
2025-11-18 16:08:17
Comments...
Original Article

Gemini 3’s state-of-the-art reasoning grasps depth and nuance, and unlocks new generative UI experiences with dynamic visual layouts, interactive tools and simulations tailored specifically for your query.

An image showing a title card and two visual components from the product interface; the top-left visual is a soft-gradient screen with an abstract, multi-colored network graph or data visualization.

Today, we introduced Gemini 3 , our most intelligent model with state-of-the-art reasoning, deep multimodal understanding and powerful agentic capabilities. It’s now available in Google Search, starting with AI Mode — marking the first time we’ve brought a Gemini model to Search on day one. Gemini 3 brings incredible reasoning power to Search because it’s built to grasp unprecedented depth and nuance for your hardest questions. It also unlocks new generative UI experiences so you can get dynamic visual layouts with interactive tools and simulations — generated specifically for you.

Here’s how Gemini 3 is supercharging Search.

Gemini 3: Our most intelligent model, right in Search

Starting today, Google AI Pro and Ultra subscribers in the U.S. can use Gemini 3 Pro, our first model in the Gemini 3 family of models, by selecting “Thinking” from the model drop-down menu in AI Mode. With Gemini 3, you can tackle your toughest questions and learn more interactively because it better understands the intent and nuance of your request. And soon, we’ll bring Gemini 3 in AI Mode to everyone in the U.S. with higher limits for users with the Google AI Pro and Ultra plans.

Thanks to Gemini 3’s advanced reasoning, Google Search’s query fan-out technique is getting a major upgrade. Now, not only can it perform even more searches to uncover relevant web content, but because Gemini more intelligently understands your intent it can find new content that it may have previously missed. This means Search can help you find even more credible, highly relevant content for your specific question.

And in the coming weeks, we’re also enhancing our automatic model selection in Search with Gemini 3. This means Search will intelligently route your most challenging questions in AI Mode and AI Overviews to this frontier model — while continuing to use faster models for simpler tasks. This will be rolling out to Google AI Pro and Ultra subscribers in the U.S.

Generative UI: Visual layouts, interactive tools and simulations in AI Mode

Gemini 3’s unparalleled multimodal understanding and powerful agentic coding capabilities are also unlocking more bespoke generative user interfaces. Now, Gemini 3 in AI Mode can dynamically create the ideal visual layout for responses on the fly — featuring interactive tools and simulations — tailored to your query.

To do this, Gemini 3 analyzes your question and creates the most helpful layout, building a custom response with visual elements — like images, tables and grids — so the final output isn’t just informative, but clear and actionable. When the model detects that an interactive tool will help you better understand the topic, it uses its generative capabilities to code a custom simulation or tool in real-time and adds it into your response.

Say you’re learning about the physics behind the three-body problem. Instead of just reading about it, you can now get an interactive simulation, allowing you to manipulate variables and see the gravitational interactions play out. Or perhaps you're researching mortgage loans: Gemini 3 in AI Mode can make you a custom-built interactive loan calculator directly in the response so you can compare two different options and see which offers the most long-term savings. And to help you continue exploring, all responses have prominent links to high-quality content across the web.

For a deeper dive, check out Google’s foundational generative UI research . We’ll continue to refine the experience over time, and we look forward to your feedback as you start to use these interactive tools and simulations in Search.

Now, it's even easier to ask anything and instantly get a richer, more helpful understanding. We’re excited for you to try this more interactive and capable Search.


Google Antigravity, a New Era in AI-Assisted Software Development

Hacker News
antigravity.google
2025-11-18 16:06:32
Comments...

How long can it take to become a US citizen?

Hacker News
usafacts.org
2025-11-18 16:06:03
Comments...
Original Article

There’s no single timeline for becoming a US citizen . For some, the journey is a few years; for others, it spans decades. Processing times, federally mandated annual visa limits, and mandatory waiting periods mean that the same goal — citizenship — is obtained in very different ways depending on where a person is from and how they begin their journey.

You are either born into US citizenship, or you follow the lengthy process to become one. That process first requires becoming a lawful permanent resident, better known as a “green card holder.”

Citizenship and Immigration Services ( USCIS ) only issues green cards to people who meet certain criteria, most often through sponsorship by a family member or employer. The timeline for getting a green card is anything but predictable.

Why is the wait for a green card so unpredictable?

To gain permanent residence, aka a green card, a person must first be approved for an immigrant visa. There are several types of these visas, and some, like those for the spouses, children, and parents of US citizens, are always available.

Others are limited, with only a certain number issued each year, and no more than 7% of those issued visas can go to applicants from any one country. When demand outpaces supply, applicants are placed in a queue to await an available visa.

Each new applicant joins the line behind others seeking the same type of visa and their families. Because spouses and children can also be included on a visa application, each family may receive more than one visa within the annual limit. For applicants from countries with especially high demand, like India, China, Mexico, and the Philippines, that can mean waiting decades to reach the front of the line.

The main paths to a green card

Each green card path follows the same basic steps, but the pace varies. Some of the wait comes from how long it takes officials to review and approve paperwork. The rest comes from waiting for a visa to become available, since only a set number are issued each year.

Green cards

The US government issued about 1.17 million green cards during fiscal year (FY) 2023.

The family path to a green card

Family ties are the most common path to a green card. In FY 2023, nearly 65% of new green card holders qualified through a US citizen or lawful permanent resident relative. But how long the process takes depends entirely on who that relative is.

For immediate family of US citizens, such as spouses, parents, and unmarried children under 21, the process is relatively fast. These applicants are not subject to annual visa limits, so once their petition is approved, they can move straight into green card processing.

Other family members face a queue. The US uses a preference system for extended family, with caps that can stretch wait times into decades:

  1. First preference : Unmarried adult children of US citizens
  2. Second preference : Spouses and children of lawful permanent residents
  3. Third preference : Married children of US citizens
  4. Fourth preference : Siblings of US citizens

Those final categories can take the longest. Mexican siblings of US citizens who applied in 2001 – the year that George W. Bush entered the White House – started to become eligible for green cards in September 2025.

The employment path to a green card

Employment is the second most common path to a green card. In FY 2023, 16.7% of new green cards were issued through jobs or job offers in the US, though roughly half of those went to the workers’ spouses and children rather than the employees themselves.

The employment route to a green card requires more steps than the family-sponsored route. Most applicants need a US employer to sponsor them. In many cases, the employer first completes a permanent labor certification, known as a PERM application, through the Labor Department to prove that hiring a foreign worker will not displace qualified US workers. That process includes confirming a government-set wage for the position, advertising the job to US candidates, and documenting the results. Once the Labor Department certifies the application, the employer can petition USCIS for the worker’s green card.

Employment-based visas are divided into three preference categories, each of which is subject to annual visa limits:

  1. First preference : Individuals with extraordinary ability, outstanding professors, or multinational executives. Applicants in this category do not need a PERM labor certification and may self-petition in some cases.
  2. Second preference : Professionals with advanced degrees or exceptional ability. Most require a PERM labor certification, though some may qualify for a national interest waiver (NIW) that allows them to self-petition without an employer.
  3. Third preference : Skilled workers, professionals, and certain other workers. These applicants typically require a PERM labor certification.

The wait time for any of these visas depends on demand and country of origin. Because of the high number of applicants and the annual and per-country limits, applicants from India and China often face waits of 10 years or more.

Some applicants can pay for premium processing to expedite review of initial paperwork, but that does not shorten the overall wait for a visa number. As of 2025, Indian professionals who applied for Second Preference employment visas in 2013 were only just becoming eligible for green cards because of the annual visa limits. For context, that means they applied when Barack Obama was beginning his second presidential term.

The humanitarian aid path to a green card

Refugees and asylees make up a smaller share of new green card holders, and their path to citizenship is among the least predictable. In FY 2023, roughly 8.5% of new green cards were granted through humanitarian protections to people who first arrived as refugees or were granted asylum after reaching the US.

Both groups can apply for a green card one year after receiving refugee or asylum status, but the time it takes to reach that point may vary. The federal government does not publish comprehensive data on how long asylum or refugee cases take from application to approval, making this part of the citizenship journey difficult to measure.

Asylum applications

In 2023, the US received around 945K asylum applications.

Becoming a naturalized citizen

Green card holders aren’t automatically placed on a path to citizenship. Some choose not to apply even when eligible, but those who do pursue it follow a relatively consistent process:

  • maintain permanent resident status for up to five years
  • meet physical presence and other eligibility requirements
  • pass the required civics and English language tests
  • take the oath of allegiance

Based on FY 2025 processing times, the full journey from receiving a green card to becoming a US citizen can take three to six years.

One goal, many timelines

There’s no single clock for becoming a US citizen. Each path runs on its own timeline shaped by law, policy, paperwork, and chance. The data shown here reflects how long each step in the process took under FY 2025 conditions. But those figures only describe how the system operates now, not how long it will take for those beginning the process today.

Government processing capacity, policy changes, and individual circumstances all affect how long the process takes. For those subject to annual visa caps, the longest wait typically stems from the limits set by law.

There’s no limit on how many people can join the line awaiting a capped visa each year, so those applying now may be entering a much longer queue than those who applied years ago.

Gemini 3 for developers: New reasoning, agentic capabilities

Hacker News
blog.google
2025-11-18 16:04:12
Comments...
Original Article

Whether you’re an experienced developer or a vibe coder, Gemini 3 can help you bring any idea to life.


Build anything with Gemini 3

Today we are introducing Gemini 3, our most intelligent model that can help bring any idea to life. Built on a foundation of state-of-the-art reasoning, Gemini 3 Pro delivers unparalleled results across every major AI benchmark compared to previous versions. It also surpasses 2.5 Pro at coding, mastering both agentic workflows and complex zero-shot tasks.

Benchmarks table with comparison numbers for Gemini 3 Pro, Gemini 2.5 Pro, Claude Sonnet 4.5 and GPT-5.1

Gemini 3 Pro fits right into existing production agent and coding workflows, while also enabling new use cases not previously possible. It’s available in preview at $2/million input tokens and $12/million output tokens for prompts 200k tokens or less through the Gemini API in Google AI Studio and Vertex AI for enterprises (see pricing for rate limits and full pricing details). Additionally, it can be utilized via your favorite developer tools within the broader ecosystem and is available, with rate limits, free of charge in Google AI Studio .
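
To make the Gemini API path concrete, here is a minimal sketch using the google-genai Python SDK. The Gemini 3 Pro model identifier below is an assumption for illustration; check the current model list in the API documentation.

from google import genai

# Reads the API key from the GOOGLE_API_KEY (or GEMINI_API_KEY)
# environment variable by default.
client = genai.Client()

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed identifier; confirm in the docs
    contents="Summarize the trade-offs between client-side and server-side "
             "bash tools for coding agents.",
)
print(response.text)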

Agentic coding

Developers are spending more and more time creating software with AI at their side. Building on the momentum of Gemini 2.5 Pro and all the feedback, Gemini 3 Pro serves as a new foundation of intelligence for what’s possible with an agentic coding model.

Gemini 3 Pro scores 54.2% on Terminal-Bench 2.0, which tests a model's ability to use tools to operate a computer via the terminal.

Terminal Bench evaluation charts for Gemini 3 and other AI models

You can feel the power of this model come to life in Google Antigravity , our new agentic development platform, in addition to Gemini CLI , Android Studio , and other coding products like Cursor, GitHub, JetBrains, Manus, Cline and more.

Google Antigravity

To advance how the model and IDE work together, we’re introducing Google Antigravity to showcase what’s possible with Gemini 3. It’s an agentic development platform that enables developers to operate at a higher, task-oriented level by managing agents across workspaces, while retaining a familiar AI IDE experience at its core.

It’s a faster way to develop: you act as the architect, collaborating with intelligent agents that operate autonomously across the editor, terminal, and browser. These agents plan and execute complex software tasks, communicating their work with the user via detailed artifacts. This elevates all aspects of development, from building features, UI iteration, and fixing bugs to researching and generating reports. Visit the Google Antigravity website to download the public preview at no charge, now available for MacOS, Windows and Linux.

Gemini API

With Gemini 3, we are releasing a client-side bash tool that empowers the model to propose shell commands as part of agentic workflows for tasks such as navigating your local filesystem, driving development processes, and automating system operations. We're pairing this with a hosted server-side bash tool for multi-language code generation and secure prototyping. Available now in the Gemini API for early access partners, with general availability coming soon.

Additionally, Gemini hosted tools Grounding with Google Search and URL context can now be combined with structured outputs. This is especially powerful for building agentic use cases which involve fetching and extracting data and then outputting them in a specific format for downstream agentic tasks.
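
A hedged sketch of what combining Grounding with Google Search and structured output could look like with the google-genai SDK; the model identifier is an assumption, and whether a given model version accepts both options together should be checked against the current API docs.

from google import genai
from google.genai import types

client = genai.Client()

config = types.GenerateContentConfig(
    # Grounding with Google Search as a hosted tool.
    tools=[types.Tool(google_search=types.GoogleSearch())],
    # Structured output: ask for JSON matching a schema.
    response_mime_type="application/json",
    response_schema={
        "type": "object",
        "properties": {
            "headline": {"type": "string"},
            "source_url": {"type": "string"},
        },
        "required": ["headline", "source_url"],
    },
)

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed identifier; confirm in the docs
    contents="Find one recent article about Gemini 3 and return it as JSON.",
    config=config,
)
print(response.text)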

Vibe coding

Gemini 3 Pro unlocks the true potential of “vibe coding”, where natural language is the only syntax you need. By significantly improving complex instruction following and deep tool use, the model can translate a high-level idea into a fully interactive app with a single prompt. It handles the heavy lifting of multi-step planning and coding details, delivering richer visuals and deeper interactivity, allowing you to focus on the creative vision.

Gemini 3 Pro tops the WebDev Arena leaderboard by scoring an impressive 1487 Elo.

Graphic showing 1487 WebDev Arena leaderboard score

Google AI Studio

Whether it’s building a game with a single prompt, an interactive landing page from unstructured voice notes, or a full on app from a napkin sketch, developers can bring their idea to life with Gemini 3. With this model, we pushed single prompt generation capabilities further than ever, meaning you can go from idea to AI-powered app with a single prompt, like this retro game built in Google AI Studio.

We’ve built Google AI Studio to be your fastest path from a prompt to an AI-native app. Build mode lets you add AI capabilities faster than ever, automatically wiring up the right models and APIs, while features like annotations enable fast and intuitive iteration. You can start building with Gemini 3 in Google AI Studio today.

Multimodal understanding

Gemini 3 is the best model in the world for complex multimodal understanding and sets new highs on MMMU-Pro for complex image reasoning and Video MMMU for video understanding. Combining its intelligence and a 1 million-token context window, developers can see significant improvements while building key multimodal use cases. To give you more control over latency and cost, you can now configure multimodal vision processing with more granularity in the Gemini API based on the visual fidelity required for your application.

Visual reasoning

Gemini 3 Pro is best-in-class for document understanding, going beyond simple OCR (Optical Character Recognition) to intelligently handle complex document understanding and reasoning.

You can see the model’s vision understanding, reasoning and coding capabilities in our demo app that brings any idea to life in Google AI Studio.

Spatial reasoning

The model’s improved spatial understanding also drives strong performance in embodied reasoning tasks like pointing, trajectory prediction and task progression, unlocking new use cases across autonomous vehicles, XR devices and robotics.

Its spatial reasoning also powers intelligent screen understanding of desktop, mobile, and OS screens, delivering significant performance improvements for computer-use agents. The model also understands the intent of user actions based on mouse movements and screen annotations, unlocking novel experiences like this Visual Computer demo app.

Video reasoning

Gemini 3 Pro captures rapid action with high-frame-rate understanding, ensuring developers never miss a critical moment in fast-moving scenes. Beyond speed, long-context recall allows for synthesizing narratives and pinpointing specific details across hours of continuous footage.

Build what’s next, today

Gemini 3 Pro is now integrated into many developer products and tools to seamlessly fit into your existing workflows and unlock entirely new ways to code.

  • Build with the Gemini API: You can integrate Gemini 3 Pro immediately into your applications via Google AI Studio and Vertex AI for Enterprise. To support the model's deeper reasoning capabilities, we’re introducing a new thinking level and more granular media resolution parameters in the API, along with stricter validation for thought signatures . This update is critical for preserving the model’s thoughts across multi-turn conversations. Check out the Developer Guide for the technical breakdown and our Prompting Guide to learn how to build with Gemini 3 Pro.
  • Experience the model’s agentic capabilities: Whether you are adding AI-native features to an Android app, automating workflows through Gemini CLI or managing a fleet of autonomous agents in Google Antigravity , Gemini 3 Pro provides the reliability needed for complex, agentic architectures.
  • Vibe code with Gemini 3 Pro: Google AI Studio is your fastest path to bring any idea to life. Get started in Build mode to generate a fully functional app with a single prompt. And if you need a little inspiration, click “I’m feeling lucky” and let Gemini 3 Pro handle the creative spark and the code implementation simultaneously.

The software landscape is shifting. As AI changes who builds and how they build, we are committed to meeting you where you are — giving you the tools to push the boundaries of what’s possible.

This is just the start of the Gemini 3 era but we can’t wait to see what you build with Gemini 3 Pro!

IRS Accessed Massive Database of Americans’ Flights Without a Warrant

403 Media
www.404media.co
2025-11-18 16:00:44
A bipartisan letter reveals the IRS searched a database of hundreds of millions of travel records without first conducting a legal review. Airlines like Delta, United, American, and Southwest are selling these records to the government through a co-owned data broker....
Original Article

The IRS accessed a database of hundreds of millions of travel records, which show when and where a specific person flew and the credit card they used, without obtaining a warrant, according to a letter signed by a bipartisan group of lawmakers and shared with 404 Media. The country’s major airlines, including Delta, United Airlines, American Airlines, and Southwest, funnel customer records to a data broker they co-own called the Airlines Reporting Corporation (ARC), which then sells access to peoples’ travel data to government agencies.

The IRS case in the letter is the clearest example yet of how agencies are searching the massive trove of travel data without a search warrant, court order, or similar legal mechanism. Instead, because the data is being sold commercially, agencies are able to simply buy access. In the letter addressed to nine major airlines, the lawmakers urge them to shut down the data selling program.

“Disclosures made by the IRS to Senator Wyden confirm that it did not follow federal law and its own policies in purchasing airline data from ARC,” the letter reads. The letter says the IRS “confirmed that it did not conduct a legal review to determine if the purchase of Americans’ travel data requires a warrant.” The signatories on the letter are Senator Ron Wyden, Congressman Andy Biggs, Chair of the Congressional Hispanic Caucus Adriano Espaillat, and Senator Cynthia Lummis.

The co-owners of ARC include United, American, Delta, Southwest, JetBlue, Alaska, Lufthansa, Air France, and Air Canada, according to the letter. Each of those airlines has a representative on ARC’s board of directors.

💡

Do you know anything else about ARC? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

ARC acts as a conduit between airlines and travel agencies. Whenever someone books a flight through an agency, such as Expedia, ARC receives that booking information. In turn, ARC makes some of that data available to its customers. ARC’s data includes information sourced through more than 12,800 travel agencies, 404 Media previously found in records obtained through the Freedom of Information Act (FOIA). The letter notes ARC also contains records from popular booking sites Kayak and Priceline and those booked through credit card rewards programs.

ARC sells government agencies the ability to search a database of around 722 million ticket transactions, stretching over 39 months of past and future travel data, the letter says. (Other records have said the figure is in the billions.) 404 Media has reported that ARC’s customers include the FBI, Secret Service, CBP, ATF, the SEC, TSA, and the State Department. ICE is also a customer. Users are able to search the database by name, credit card, airline, and more.

A screenshot from the letter.

“Because ARC only has data on tickets booked through travel agencies, government agencies seeking information about Americans who book tickets directly with an airline must issue a subpoena or obtain a court order to obtain those records. But ARC’s data sales still enable government agencies to search through a database containing 50% of all tickets booked without seeking approval from a judge,” the letter reads.

It adds that buying tickets directly from airlines is the most profitable method for the airlines. “As such, this surveillance program also raises serious antitrust concerns. Americans who buy their airplane tickets through third party travel agencies, including by redeeming credit card points through a credit card travel portal, deserve the same privacy protections as tickets booked directly through the airlines, whether with cash or frequent flier points,” it reads.

The database access sold to government agencies is called the Travel Intelligence Program (TIP). An ARC spokesperson previously said TIP was established after the September 11 attacks. “Over the years, TIP has likely contributed to the prevention and apprehension of criminals involved in human trafficking, drug trafficking, money laundering, sex trafficking, national security threats, terrorism and other imminent threats of harm to the United States.”

ARC did not respond to a request for comment. The IRS acknowledged a request for comment but did not provide a statement in time for publication.

“As with other surveillance programs involving the government’s purchase of Americans’ personal data, ARC’s data sales are an end-run around the privacy rights guaranteed by the Fourth Amendment to the Constitution,” the letter reads. “Regardless of whether you approved this practice, or simply failed to stop it, you are directly responsible for this outrageous violation of your customers’ privacy. Your customers deserve far better. Accordingly, we urge you to exercise your authority as the corporate owners of ARC to protect your customers’ privacy and immediately shut down this program. We also urge you to direct ARC to adopt a policy of only turning over Americans’ travel records to the government when legally compelled to do so, except in emergencies.”

About the author

Joseph is an award-winning investigative journalist focused on generating impact. His work has triggered hundreds of millions of dollars worth of fines, shut down tech companies, and much more.

Joseph Cox

Gemini 3

Hacker News
blog.google
2025-11-18 16:00:33
Comments...
Original Article

Gemini 3 is our most intelligent model that helps you bring any idea to life.


A note from Google and Alphabet CEO Sundar Pichai:

Nearly two years ago we kicked off the Gemini era, one of our biggest scientific and product endeavors ever undertaken as a company. Since then, it’s been incredible to see how much people love it. AI Overviews now have 2 billion users every month. The Gemini app surpasses 650 million users per month, more than 70% of our Cloud customers use our AI, 13 million developers have built with our generative models, and that is just a snippet of the impact we’re seeing.

And we’re able to get advanced capabilities to the world faster than ever, thanks to our differentiated full stack approach to AI innovation — from our leading infrastructure to our world-class research and models and tooling, to products that reach billions of people around the world.

Every generation of Gemini has built on the last, enabling you to do more. Gemini 1’s breakthroughs in native multimodality and long context window expanded the kinds of information that could be processed — and how much of it. Gemini 2 laid the foundation for agentic capabilities and pushed the frontiers on reasoning and thinking, helping with more complex tasks and ideas, leading to Gemini 2.5 Pro topping LMArena for over six months.

And now we’re introducing Gemini 3, our most intelligent model, that combines all of Gemini’s capabilities together so you can bring any idea to life.

It’s state-of-the-art in reasoning, built to grasp depth and nuance — whether it’s perceiving the subtle clues in a creative idea, or peeling apart the overlapping layers of a difficult problem. Gemini 3 is also much better at figuring out the context and intent behind your request, so you get what you need with less prompting. It’s amazing to think that in just two years, AI has evolved from simply reading text and images to reading the room.

And starting today, we’re shipping Gemini at the scale of Google. That includes Gemini 3 in AI Mode in Search with more complex reasoning and new dynamic experiences. This is the first time we are shipping Gemini in Search on day one. Gemini 3 is also coming today to the Gemini app, to developers in AI Studio and Vertex AI, and in our new agentic development platform, Google Antigravity — more below.

Like the generations before it, Gemini 3 is once again advancing the state of the art. In this new chapter, we’ll continue to push the frontiers of intelligence, agents, and personalization to make AI truly helpful for everyone.

We hope you like Gemini 3. We’ll keep improving it, and we look forward to seeing what you build with it. Much more to come!


Introducing Gemini 3: our most intelligent model that helps you bring any idea to life

Demis Hassabis, CEO of Google DeepMind and Koray Kavukcuoglu, CTO of Google DeepMind and Chief AI Architect, Google, on behalf of the Gemini team

Today we’re taking another big step on the path toward AGI and releasing Gemini 3.

It’s the best model in the world for multimodal understanding and our most powerful agentic and vibe coding model yet, delivering richer visualizations and deeper interactivity — all built on a foundation of state-of-the-art reasoning.

We’re beginning the Gemini 3 era by releasing Gemini 3 Pro in preview and making it available today across a suite of Google products so you can use it in your daily life to learn, build and plan anything. We’re also introducing Gemini 3 Deep Think — our enhanced reasoning mode that pushes Gemini 3 performance even further — and giving access to safety testers before making it available to Google AI Ultra subscribers.

State-of-the-art reasoning with unprecedented depth and nuance

Gemini 3 Pro can bring any idea to life with its state-of-the-art reasoning and multimodal capabilities. It significantly outperforms 2.5 Pro on every major AI benchmark.

It tops the LMArena Leaderboard with a breakthrough score of 1501 Elo. It demonstrates PhD-level reasoning with top scores on Humanity’s Last Exam (37.5% without the use of any tools) and GPQA Diamond (91.9%). It also sets a new standard for frontier models in mathematics, achieving a new state-of-the-art of 23.4% on MathArena Apex.

Beyond text, Gemini 3 Pro redefines multimodal reasoning with 81% on MMMU-Pro and 87.6% on Video-MMMU. It also scores a state-of-the-art 72.1% on SimpleQA Verified, showing great progress on factual accuracy. This means Gemini 3 Pro is highly capable at solving complex problems across a vast array of topics like science and mathematics with a high degree of reliability.

Gemini 3 evaluation table against other AI models

Gemini 3 Pro also brings a new level of depth and nuance to every interaction. Its responses are smart, concise and direct, trading cliché and flattery for genuine insight — telling you what you need to hear, not just what you want to hear. It acts as a true thought partner that gives you new ways to understand information and express yourself, from translating dense scientific concepts by generating code for high-fidelity visualizations to creative brainstorming.

Gemini 3 can code a visualization of plasma flow in a tokamak and write a poem capturing the physics of fusion.

Gemini 3 Deep Think

Gemini 3 Deep Think mode pushes the boundaries of intelligence even further, delivering a step-change in Gemini 3’s reasoning and multimodal understanding capabilities to help you solve even more complex problems.

In testing, Gemini 3 Deep Think outperforms Gemini 3 Pro’s already impressive performance on Humanity’s Last Exam (41.0% without the use of tools) and GPQA Diamond (93.8%). It also achieves an unprecedented 45.1% on ARC-AGI-2 (with code execution, ARC Prize Verified), demonstrating its ability to solve novel challenges.

Gemini 3 Deep Think mode excels on some of the most challenging AI benchmarks. See details on our evaluation methodology.

Evaluation chart for Gemini 3 Deep Think

Gemini 3 helps you learn, build and plan anything

Learn anything

Gemini was built from the start to seamlessly synthesize information about any topic across multiple modalities, including text, images, video, audio and code. Gemini 3 pushes the frontier of multimodal reasoning to help you learn in ways that make sense for you by combining its state-of-the-art reasoning, vision and spatial understanding, leading multilingual performance, and 1 million-token context window.

For example, if you want to learn how to cook in your family tradition, Gemini 3 can decipher and translate handwritten recipes in different languages into a shareable family cookbook. Or if you want to learn about a new topic, you can give it academic papers, long video lectures or tutorials and it can generate code for interactive flashcards, visualizations or other formats that will help you master the material. It can even analyze videos of your pickleball match, identify areas where you can improve and generate a training plan for overall form improvements.

To help you make better sense of information on the web, AI Mode in Search now uses Gemini 3 to enable new generative UI experiences like immersive visual layouts and interactive tools and simulations, all generated completely on the fly based on your query.

Build anything

Building on the success of 2.5 Pro, Gemini 3 delivers on the promise of bringing any idea to life for developers. It’s exceptional at zero-shot generation and handles complex prompts and instructions to render richer, more interactive web UI.

Gemini 3 is the best vibe coding and agentic coding model we’ve ever built – making our products more autonomous and boosting developer productivity. It tops the WebDev Arena leaderboard with an impressive score of 1487 Elo. It also scores 54.2% on Terminal-Bench 2.0, which tests a model’s ability to use tools to operate a computer via the terminal, and it greatly outperforms 2.5 Pro on SWE-bench Verified (76.2%), a benchmark that measures coding agents.

You can now build with Gemini 3 in Google AI Studio, Vertex AI, Gemini CLI and our new agentic development platform, Google Antigravity. It’s also available in third-party platforms like Cursor, GitHub, JetBrains, Manus, Replit and more.

Introducing a new agent-first development experience

As model intelligence accelerates with Gemini 3, we have the opportunity to reimagine the entire developer experience. Today we’re releasing Google Antigravity, our new agentic development platform that enables developers to operate at a higher, task-oriented level.

Using Gemini 3’s advanced reasoning, tool use and agentic coding capabilities, Google Antigravity transforms AI assistance from a tool in a developer’s toolkit into an active partner. While the core of Google Antigravity is a familiar AI IDE experience, its agents have been elevated to a dedicated surface and given direct access to the editor, terminal and browser. Now, agents can autonomously plan and execute complex, end-to-end software tasks simultaneously on your behalf while validating their own code.

In addition to Gemini 3 Pro, Google Antigravity also comes tightly coupled with our latest Gemini 2.5 Computer Use model for browser control and our top-rated image editing model Nano Banana (Gemini 2.5 Image).

Google Antigravity uses Gemini 3 to drive an end-to-end agentic workflow for a flight tracker app. The agent independently plans, codes the application and validates its execution through browser-based computer use.

Plan anything

Since introducing the agentic era with Gemini 2, we’ve made a lot of progress, not only advancing Gemini’s coding agent abilities, but also improving its ability to reliably plan ahead over longer horizons. Gemini 3 demonstrates this by topping the leaderboard on Vending-Bench 2, which tests longer-horizon planning by managing a simulated vending machine business. Gemini 3 Pro maintains consistent tool usage and decision-making for a full simulated year of operation, driving higher returns without drifting off task.

Gemini 3 Pro demonstrates better long-horizon planning to generate significantly higher returns compared to other frontier models.

Chart showing Gemini 3 Pro outperforming other AI models on long-horizon planning

This means Gemini 3 can better help you get things done in everyday life. By combining deeper reasoning with improved, more consistent tool use, Gemini 3 can take action on your behalf by navigating more complex, multi-step workflows from start to finish — like booking local services or organizing your inbox — all while under your control and guidance.

Google AI Ultra subscribers can try these agentic capabilities in the Gemini app with Gemini Agent today. We’ve learned a lot improving Gemini’s agentic capabilities, and we’re excited to see how you use it as we expand to more Google products soon.

Building Gemini 3 responsibly

Gemini 3 is our most secure model yet, and has undergone the most comprehensive set of safety evaluations of any Google AI model to date. The model shows reduced sycophancy, increased resistance to prompt injections and improved protection against misuse via cyberattacks.

In addition to our in-house testing for the critical domains in our Frontier Safety Framework, we've also partnered on evaluations with world-leading subject matter experts, provided early access to bodies like the UK AISI, and obtained independent assessments from industry experts like Apollo, Vaultis, Dreadnode and more. For more information, see the Gemini 3 model card.

The next era of Gemini

This is just the start of the Gemini 3 era. As of today, Gemini 3 starts rolling out:

  • For everyone in the Gemini app and for Google AI Pro and Ultra subscribers in AI Mode in Search
  • For developers in the Gemini API in AI Studio; in our new agentic development platform, Google Antigravity; and in Gemini CLI
  • For enterprises in Vertex AI and Gemini Enterprise

For Gemini 3 Deep Think mode, we’re taking extra time for safety evaluations and input from safety testers before making it available to Google AI Ultra subscribers in the coming weeks.

We plan to release additional models to the Gemini 3 series soon so you can do more with AI. We look forward to getting your feedback and seeing what you learn, build and plan with Gemini.


Speaking Freely: Benjamin Ismail

Electronic Frontier Foundation
www.eff.org
2025-11-18 15:58:55
Interviewer: Jillian York Benjamin Ismail is the Campaign and Advocacy Director for GreatFire, where he leads efforts to expose the censorship apparatus of authoritarian regimes worldwide. He also runs/oversees the App Censorship Project, including the AppleCensorship.com and GoogleCensorship.org pl...
Original Article

Interviewer: Jillian York

Benjamin Ismail is the Campaign and Advocacy Director for GreatFire, where he leads efforts to expose the censorship apparatus of authoritarian regimes worldwide. He also runs/oversees the App Censorship Project, including the AppleCensorship.com and GoogleCensorship.org platforms, which track mobile app censorship globally. From 2011 to 2017, Benjamin headed the Asia-Pacific desk at Reporters Without Borders (RSF).

Jillian York : Hi Benjamin, it's great to chat with you. We got to meet at the Global Gathering recently and we did a short video there and it was wonderful to get to know you a little bit. I'm going to start by asking you my first basic question: What does free speech or free expression mean to you?

Benjamin Ismail : Well, it starts with a very, very big question. What I have in mind is a cliche answer, but it's what I genuinely believe. I think about all freedoms. So when you say free expression, free speech, or freedom of information or Article 19, all of those concepts are linked together, I immediately think of all human rights at once. Because what I have seen during my current or past work is how that freedom is really the cornerstone of all freedom. If you don’t have that, you can’t have any other freedom. If you don’t have freedom of expression, if you don't have journalism, you don't have pluralism of opinions—you have self-censorship.

You have realities, violations, that exist but are not talked about, and are not exposed, not revealed, not tackled, and nothing is really improved without that first freedom. I also think about Myanmar because I remember going there in 2012, when the country had just opened after the democratic revolution. We got the chance to meet with many officials, ministers, and we got to tell them that they should start with that because their speech was “don’t worry, don’t raise freedom of speech, freedom of the press will come in due time.”

And we were saying “no, that’s not how it works!” It doesn’t come in due time when other things are being worked on. It starts with that so you can work on other things. And so I remember very well those meetings and how actually, unfortunately, the key issues that re-emerged afterwards in the country were precisely due to the fact that they failed to truly implement free speech protections when the country started opening.

JY: What was your path to this work?

BI : This is a multi-faceted answer. So, I was studying Chinese language and civilization at the National Institute of Oriental Languages and Civilizations in Paris along with political science and international law. When I started that line of study, I considered maybe becoming a diplomat…that program led to preparing for the exams required to enter the diplomatic corps in France.

But I also heard negative feedback about the Ministry of Foreign Affairs and, notably, first-hand testimonies from friends and fellow students who had done internships there. I already knew that I had a little bit of an issue with authority. My experience as an assistant at Reporters Without Borders challenged the preconceptions I had about NGOs and civil society organizations in general. I was a bit lucky to come at a time when the organization was really trying to find its new direction, its new inspiration. So it was a brief phase where the organization itself was hungry for new ideas.

Being young and not very experienced, I was invited to share my inputs, my views—among many others of course. I saw that you can influence an organization’s direction, actions, and strategy, and see the materialization of those strategic choices. Such as launching a campaign, setting priorities, and deciding how to tackle issues like freedom of information, and the protection of journalists in various contexts.

That really motivated me and I realized that I would have much less to say if I had joined an institution such as the Ministry of Foreign Affairs. Instead, I was part of a human-sized group, about thirty-plus employees working together in one big open space in Paris.

After that experience I set my mind on joining the civil society sector, focusing on freedom of the press. Working on journalistic issues, you get to touch on many different issues in many different regions, and I really like that. So even though it’s kind of monothematic, it's a single topic that's encompassing everything at the same time.

I was dealing with safety issues for Pakistani journalists threatened by the Taliban. At the same time I followed journalists pressured by corporations such as TEPCO and the government in Japan for covering nuclear issues. I got to touch on many topics through the work of the people we were defending and helping. That’s what really locked me onto this specific human right.

I already had my interest in political and civil rights when I was studying, but after that first experience, at the end of 2010, I went to China and got called by Reporters Without Borders. They told me that the head of the Asia desk was leaving and invited me to apply for the position. At that time, I was in Shanghai, working to settle down there. The alternative was accepting a job that would take me back to Paris but likely close the door on any return to China. Once you start giving interviews to outlets like the BBC and CNN, well… you know how that goes—RSF was not viewed favorably in many countries. Eventually, I decided it was a huge opportunity, so I accepted the job and went back to Paris, and from then on I was fully committed to that issue.

JY: For our readers, tell us what the timeline of this was.

BI : I finished my studies in 2009. I did my internship with Reporters Without Borders that year and continued to work pro bono for the organization on the Chinese website in 2010. Then I went to China, and in January 2011, I was contacted by Reporters without Borders about the departure of the former head of the Asia Pacific Desk.

I did my first and last fact-finding mission in China, and went to Beijing. I met the artist Ai Weiwei in Beijing just a few weeks before he was arrested, around March 2011, and finally flew back to Paris and started heading the Asia desk. I left the organization in 2017.

JY: Such an amazing story. I’d love to hear more about the work that you do now.

BI: The story of the work I do now actually starts in 2011. That was my first year heading the Asia Pacific Desk. That same year, a group of anonymous activists based in China started a group called GreatFire. They launched their project with a website where you can type in any URL and it will test the connection from mainland China to that URL and let you know if it’s accessible or blocked. They also kept the test records so that you can look at the history of the blocking of a specific website, which is great. That was GreatFire’s first project for monitoring web censorship in mainland China.

We started exchanging information, working on the issue of censorship in China. They continued to develop more projects, which I tried to highlight as well. I also helped them to secure some funding. At the very beginning, they were working on these things as a side job. And progressively they managed to get some funding, which was very difficult because of the anonymity.

One of the things I remember is that I helped them get some funding from the EU through a mechanism called “Small Grants,” where every grant would be around €20,000-30,000. The EU, you know, is a bureaucratic entity and they were demanding some paperwork and documents. But I was telling them that they wouldn’t be able to get the real names of the people working at GreatFire, and that they should not be concerned about that because what they wanted was to finance that tool. So if we were to show them that the people they were going to send the money to were actually the people controlling that website, then it would be fine. And so we featured a little EU logo, just for one day I think, on the footer of the website so they could check that. And that’s how we convinced the EU to support GreatFire for that work. Also, there's this tactic called “Collateral Freedom” that GreatFire uses very well.

The idea is that you host sensitive content on HTTPS servers that belong to companies which also operate inside China and are accessible there. Because it’s HTTPS, the connection is encrypted, so the authorities can’t just block a specific page—they can’t see exactly which page is being accessed. To block it, they’d have to block the entire service. Now, they can do that, but it comes at a higher political and economic cost, because it means disrupting access to other things hosted on that same service—like banks or major businesses. That’s why it’s called “collateral freedom”: you’re basically forcing the authorities to risk broader collateral damage if they want to censor your content.

When I was working for RSF, I proposed that we replicate that tactic on the 12th of March—that's the World Day against Cyber Censorship. We had the habit of publishing what we called the “enemies of the Internet” report, where we would highlight and update the situation on the countries which were carrying out the harshest repression online; countries like Iran, Turkmenistan, North Korea, and of course, China. I suggested in a team meeting: “what if we highlighted the good guys? Maybe we could highlight 10 exiled media and use collateral freedom to uncensor those.” And so we did: some Iranian media, Egyptian media, Chinese media, Turkmen media were uncensored using mirrors hosted on HTTPS servers owned by big, and thus harder to block, companies... and that’s how we started to do collateral freedom and it continued to be an annual thing.

I also helped in my personal capacity, including after I left Reporters Without Borders. After I left RSF, I joined another NGO focusing on China, which I knew also from my time at RSF. I worked with that group for a year and a half; GreatFire contacted me to work on a website specifically. So here we are at the beginning of 2020: they had just started this website called Applecensorship.com, which allowed users to test the availability of any app in any of Apple’s 175 App Stores worldwide. They needed a better website—one that allowed advocacy content—for that tool.

The idea was to make a website useful for academics doing research, journalists investigating app store censorship and control and human rights NGOs, civil society organizations interested in the availability of any tools. Apple’s censorship in China started quickly after the company entered the Chinese market, in 2010.

In 2013, one of the projects by GreatFire, which had been turned into an iOS app, was removed by Apple 48 hours after its release on the App Store, at the demand of the Chinese authorities. That project was Free Weibo, a website that features censored posts from Weibo, the Chinese equivalent of Twitter—we crawl social media, detect censored posts and republish them on the site. In 2017 it was reported that Apple had removed all VPNs from the Chinese App Store.

So that episode in 2013, together with Apple’s growing censorship in China (and in other places too), led to the creation of AppleCensorship in 2019. GreatFire asked me to work on that website. The transformation into an advocacy platform was successful. I then started working full time on that project, which has since evolved into the App Censorship Project, which includes another website, googlecensorship.org (offering features similar to Applecensorship.com but for the 224 Play Stores worldwide). In the meantime, I became the head of campaigns and advocacy, because of my background at RSF.

JY: I want to ask you, looking beyond China, what are some other places in the world that you're concerned about at the moment, whether on a professional basis, but also maybe just as a person. What are you seeing right now in terms of global trends around free expression that worry you?

BI : I think, like everyone else, that what we're seeing in Western democracies—in the US and even in Europe—is concerning. But I'm still more concerned about authoritarian regimes than about our democracies. Maybe it's a case of not learning my lesson or of naive optimism, but I'm still more concerned about China and Russia than I am about what I see in France, the UK, or the US.

There has been some recent reporting about China developing very advanced censorship and surveillance technologies and exporting them to other countries like Myanmar and Pakistan. What we’re seeing in Russia—I’m not an expert on that region, but we heard experts saying back in 2022 that Russia was trying to increase its censorship and control, but that it couldn’t become like China because China had exerted control over its internet from the very beginning: They removed Facebook back in 2009, then Google was pushed away by the authorities (and the market). And the Chinese authorities successfully filled the gaps left by the absence of those foreign Western companies.

Some researchers working on Russia were saying that it wasn’t really possible for Russia to do what China had done because it was unprepared and that China had engineered it for more than a decade. What we are seeing now is that Russia is close to being able to close its Internet, to close the country, to replace services by its own controlled ones. It’s not identical, but it’s also kind of replicating what China has been doing. And that’s a very sad observation to make.

Beyond the digital, the issue of how far Putin is willing to go in escalating is also concerning. As a human being and an inhabitant of the European continent, I’m concerned by the ability of a country like Russia to isolate itself while waging a war. Russia is engaged in a real war and at the same time is able to completely digitally close down the country. Between that and the example of China exporting censorship, I’m not far from thinking that in ten or twenty years we’ll have a completely splintered internet.

JY : Do you feel like having a global perspective like this has changed or reshaped your views in any way?

BI : Yes, in the sense that when you start working with international organizations, and you start hearing about the world and how human rights are universal values, and you get to meet people and go to different countries, you really get to experience how universal those freedoms and aspirations are. When I worked at RSF and lobbied governments to pass a good law or abolish a repressive one, or when I worked on a case of a jailed journalist or blogger, I got to talk to authorities and to hear weird justifications from certain governments (not mentioning any names but Myanmar and Vietnam) like “those populations are different from the French” and I would receive pushback that the ideas of freedoms I was describing were not applicable to their societies. It’s a bit destabilizing when you hear that for the first time. But as you gain experience, you can clearly explain why human rights are universal and why different populations shouldn’t be ruled differently when it comes to human rights.

Everyone wants to be free. This notion of “universality” is comforting because when you’re working for something universal, the argument is there. The freedoms you defend can’t be challenged in principle, because everyone wants them. If governments and authorities really listened to their people, they would hear them calling for those rights and freedoms.

Or that’s what I used to think. Now we hear this growing rhetoric that we (people from the West) are exporting democracy, that it’s a Western value and not a universal one. This discourse, notably developed by Xi Jinping in China, which frames “Western democracy” as a new concept, is a complete fallacy. Democracy was invented in the West, but democracy is universal. Unfortunately, I now believe that, in the future, we will have to justify and argue much more strongly for the universality of concepts like democracy, human rights and fundamental freedoms.

JY : Thank you so much for this insight. And now for our final question: Do you have a free speech hero?

BI : No.

JY : No? No heroes? An inspiration maybe.

BI : On the contrary, I’ve been disappointed so much by certain figures that were presented as human rights heroes…Like Aung San Suu Kyi during the Rohingya crisis, on which I worked when I was at RSF.

Myanmar officially recognizes 135 ethnic groups, but somehow this one additional ethnic minority (the Rohingya ) is impossible for them to accept. It’s appalling. It’s weird to say, but some heroes are not really good people either. Being a hero is doing a heroic action, but people who do heroic actions can also do very bad things before or after, at a different level. They can be terrible persons, husbands or friends and be a “human rights” hero at the same time.

Some people really inspired me but they’re not public figures. They are freedom fighters, but they are not “heroes”. They remain in the shadows. I know their struggles; I see their determination, their conviction, and how their personal lives align with their role as freedom fighters. These are the people who truly inspire me.

Jessica Tisch Has Earned Her Seat at Eric Adams's Table of Success

hellgate
hellgatenyc.com
2025-11-18 15:48:36
The commissioner is a “competent” top cop with bottomless pockets—and an eye trained on Gracie Mansion?...
Original Article

We just updated the Table of Success for the final time, and we would have been remiss not to include the current (and future?) NYPD Commissioner Jessica Tisch, who's branded herself as a competent bureaucrat—rooting out corruption in a department of grifters, dilettantes and sycophants—empowered by Mayor Eric Adams. But is that the whole story? Read her entry below and decide for yourself.

Jessica Tisch and Eric Adams have a lot more in common than one might expect for a hotel heiress who grew up on the Upper East Side and a cop from Brownsville. They've both proclaimed they are passionate about public service , hate rats and bail reform , and support a policing strategy that revolves around quality-of-life enforcement : cracking down on sex workers, e-bikes, homeless New Yorkers, and street vendors. That's probably why Adams tapped Tisch in November 2024 to be his fourth and final NYPD commissioner—that, and her reputation as a competent bureaucrat, an archetype largely absent from the Adams administration and especially the scandal-ridden NYPD .

As for why Tisch said yes, well, a stint as NYPD commissioner—even in a department as scandal-marred as Adams's—is yet another stepping stone in Tisch's rise in City government, which could eventually go, if the buzz is to be believed , all the way to Gracie Mansion. But despite her promises of departmental reform and accountability for cops who engage in misconduct, and the veneer of respectability she lends the department, her tenure has largely been more of the same: cracking down on the city's most vulnerable residents, serving the interests of the powerful, and helping cops who hurt New Yorkers dodge accountability.

Tisch was born with the proverbial silver spoon in her mouth, part of the third generation of the Loews Corporation business dynasty built in 1940s Brooklyn by a pair of entrepreneurial brothers (that's her family's name on New York University's arts school, just one of their many philanthropic endeavors; others include a wing at the Met, a children's zoo in Central Park, and a hospital in the NYU Langone system). She has two brothers: One is the current president and CEO of Loews, and the other launched his own hedge fund in 2023. But Tisch has always had a higher calling, she's said, a noblesse oblige to commit her life to public service in New York City.


Google Antigravity

Hacker News
antigravity.google
2025-11-18 15:47:38
Comments...

[$] The current state of Linux architecture support

Linux Weekly News
lwn.net
2025-11-18 15:28:55
There have been several recent announcements about Linux distributions changing the list of architectures they support, or adjusting how they build binaries for some versions of those architectures. Ubuntu introduced architecture variants, Fedora considered dropping support for i686 but reversed co...
Original Article

The page you have tried to view (The current state of Linux architecture support) is currently available to LWN subscribers only.

Reader subscriptions are a necessary way to fund the continued existence of LWN and the quality of its content.


Please consider subscribing to LWN. An LWN subscription provides numerous benefits, including access to restricted content and the warm feeling of knowing that you are helping to keep LWN alive.

(Alternatively, this item will become freely available on December 4, 2025)

Cloudflare outage causes error messages across the internet

Guardian
www.theguardian.com
2025-11-18 15:11:51
US company that defends millions of websites against malicious attacks says it believes issue ‘is now resolved’Explainer: What is Cloudflare?A key piece of the internet’s usually hidden infrastructure suffered a global outage on Tuesday, causing error messages to flash up across websites. Cloudflare...
Original Article

A key piece of the internet’s usually hidden infrastructure suffered a global outage on Tuesday, causing error messages to flash up across websites.

Cloudflare, a US company whose services include defending millions of websites against malicious attacks, experienced an unidentified problem that meant internet users could not access some of its customers’ websites.

Some site owners could not access their performance dashboards. Sites including X and OpenAI suffered increased outages at the same time as Cloudflare’s problems, according to Downdetector.

The outage was reported at 11.48am London time and by 2.48pm the company said: “A fix has been implemented and we believe the incident is now resolved. We are continuing to monitor for errors to ensure all services are back to normal.”

A spokesperson for Cloudflare apologised to its customers “and the internet in general for letting you down today”. They said: “We will learn from today’s incident and improve.”

As it tried to fix the problem the company disabled an encryption service called Warp in London and said: “Users in London trying to access the internet via Warp will see a failure to connect.”

Cloudflare was described as “the biggest company you’ve never heard of” by Prof Alan Woodward of the Surrey Centre for Cyber Security. The company says it provides services to “protect your websites, apps, APIs, and AI workloads while accelerating performance”.

Woodward described it as a “gatekeeper” and said its roles included monitoring traffic to sites to defend them against distributed denial of service attacks when malicious actors try to overwhelm sites with requests. It also checks users are human.

After finding a fix, Cloudflare said the root cause “was a configuration file that is automatically generated to manage threat traffic”.

This file grew beyond its expected size and triggered a crash in the software system that handles traffic for a number of Cloudflare’s services.


“To be clear, there is no evidence that this was the result of an attack or caused by malicious activity,” a spokesperson said. “We expect that some Cloudflare services will be briefly degraded as traffic naturally spikes post-incident but we expect all services to return to normal in the next few hours.”

The problems at Cloudflare come less than a month after an outage of Amazon Web Services brought down thousands of sites.

“We’re seeing how few of these companies there are in the infrastructure of the internet, so that when one of them fails it becomes really obvious quickly,” Woodward said.

Gemini 3 Pro Preview Live in AI Studio

Hacker News
aistudio.google.com
2025-11-18 15:09:38
Comments...

Contractor Paying Random People $300 to Physically Track Immigrants for ICE

403 Media
www.404media.co
2025-11-18 15:05:59
“The more I listened to it, the more I’m like, something doesn’t sound right,” a person who was briefed on the pilot plans told 404 Media....
Original Article

A current pilot project aims to pay former law enforcement and military officers to physically track immigrants and verify their addresses to give to ICE for $300 each. There is no indication that the pilot involves licensed private investigators, and it appears to be open to people who are now essentially members of the general public, 404 Media has learned.

The pilot is a dramatic, and potentially dangerous, escalation in the Trump administration’s mass deportation campaign. People without any official role in government would be tasked with tracking down targets for ICE. It appears to be part of ICE’s broader plan to use bounty hunters or skip tracers to confirm immigrants’ addresses through data and physical surveillance. Some potential candidates for the pilot were recruited on LinkedIn and were told they would be given vehicles to monitor the targets.

“The more I listened to it, the more I’m like, something doesn’t sound right,” a person who was briefed on the pilot plans told 404 Media. 404 Media granted multiple people anonymity to speak more candidly and to avoid retaliation.

💡

Do you know anything else about ICE's plan to hire skip tracers or similar? I would love to hear from you. Using a non-work device, you can message me securely on Signal at joseph.404 or send me an email at joseph@404media.co.

In a LinkedIn post in October, Jim Brown, president of government contractor consultant Feds United and a former longtime senior ICE official, said he was looking for retired law enforcement or military personnel for an upcoming project.

“Feds United is seeking approximately 20 more retired experienced law enforcement officers or retired military personnel in the DC/Northern Virginia area to participate in a 90-day pilot project that is expected to kick off within the next few weeks,” he wrote.

“The project will assess whether contractors can validate addresses associated with subjects of interest. Participants will work on surveillance teams and this is an OBSERVE and REPORT only task,” Brown wrote. Nearly two dozen people replied to that post, with many of them expressing interest in the work.

Brown’s LinkedIn post did not mention ICE, but two people briefed on the plans said the work would entail verifying information for ICE.

A screenshot of Brown's LinkedIn post.

Feds United’s website says it is a “client-focused federal consulting firm that supplies subject matter experts to federal contractors to assist them in proposal response development and completing contract delivery services to the government client.” It claims to offer “subject matter experts” from ICE, Customs and Border Protection (CBP), the Secret Service, and the FBI.

Recently on LinkedIn, Brown has been posting positively about ICE’s Enforcement and Removal Operations (ERO), and specifically the agency’s arrests of convicted criminals in the country illegally. Immigrants with no criminal record are now the largest group in ICE detention, according to data from September.

Brown said that ICE does not have good addresses for some of its targets, one person briefed on the plans recalled. Feds United would give recruited individuals a list of addresses based on things like utility bills, the person said. Feds United would split the people into teams to perform the surveillance, and after verifying the target lived at the address each person on the team would be paid $300, they added. This would go up to a maximum of $30,000, they said.

“Do not talk to the neighbors,” the person said, recalling the plans. “This was strictly supposed to be observe and report,” referring to a tactic where they are not supposed to directly interact with anyone.

Broadly these details of the pilot line up with ICE’s strategy laid out in procurement documents reported in the media and reviewed by 404 Media. At the end of October, ICE published a Request for Information (RFI) asking interested contractors to contact the agency. Companies would be given information on 10,000 immigrants to locate, with further packages going up to 1,000,000, the Intercept reported. Contractors would be paid “monetary bonuses” based on performance, the document said.

This month 404 Media reported that ICE has allocated $180 million to hiring bounty hunters and skip tracers to stalk immigrants. Other procurement documents said ICE was seeking assistance with a “docket size” of 1.5 million, and the agency would give contractors a batch of 50,000 last known addresses of aliens residing in the U.S. Bounty hunters or skip tracers would then verify the people who lived at those addresses, or find their new location, and provide that information to ICE’s ERO.

“To achieve a higher level of confidence, the vendor may physically verify the alien’s location and presence, preferably confirming their home or work location. The vendor will then report the physical location to the Government or inform the Government that it is not able to locate the alien, and any additional visits would be fruitless. The vendor should prioritize locating the home address and only resort to employment location, failing that,” one of the documents said.

“It is outrageous that ICE is funneling taxpayer money into a surveillance operation aimed at immigrants instead of real threats. It is as wasteful as it is disgraceful,” Congressional Hispanic Caucus Chairman Rep. Adriano Espaillat told 404 Media in a statement. “Every crime that goes uninvestigated is on this administration for diverting law enforcement capacity toward Stephen Miller’s political fantasies rather than true public safety.”

Private investigators and skip tracers 404 Media spoke to had mixed reactions to ICE’s plan. One was concerned about the outsourcing of government functions to private industry, while another said they would do the work.

One of the people briefed on the Virginia and DC pilot said Feds United was subcontracting under SOS International LLC, or SOSi, which is a large government contractor. In October the Department of Homeland Security (DHS) signed a $7 million contract with SOSi for skip tracing services, The Lever reported.

“I do not comment on current projects where I am not the prime vendor,” Brown from Feds United told 404 Media. SOSi did not respond to a request for comment. When asked specifically if SOSi would be able to comment on the pilot, Brown said “after my years of federal training, my response is ‘I cannot confirm nor deny who my client is.’”

None of the people briefed on the plan who spoke to 404 Media are licensed private investigators. In Virginia, private investigators must apply and be registered with the state’s Department of Criminal Justice Services. In DC, private investigators and security professionals similarly need to apply for a license. But in Feds United’s case, the company appears to be trying to recruit people simply on whether they are former military or law enforcement, despite them being asked to perform physical surveillance of targets.

“It’s probably because of the surge of work that they send out these unlicensed individuals to see how they do and eventually they plan to roll them in under their company license of the general contractor,” Igor Ostrovskiy, an experienced private investigator with Ostro Intelligence, and who has expressed concerns with ICE’s plans, told 404 Media. He called the plan dangerous, especially if the people are armed.

“I’ve done large contracts [...] and it just didn’t track,” one of the other people briefed on the plans said.

About the author

Joseph is an award-winning investigative journalist focused on generating impact. His work has triggered hundreds of millions of dollars worth of fines, shut down tech companies, and much more.

Joseph Cox

Announcing Lix 2.94 “Açaí na tigela”

Lobsters
lix.systems
2025-11-18 15:01:46
Comments...
Original Article

We at the Lix team are proud to announce our fifth major release, version 2.94 “Açaí na tigela”.

This release focuses on bugfixes, quality-of-life improvements and performance improvements, and it begins integrating Lix with the Cap’n Proto remote procedure call runtime in order to replace the previous bespoke implementation.

Açaí na tigela is a sweet Brazilian snack food from Pará and Amazonas, made with the frozen and mashed fruit of the açaí palm.

Lix is a Nix implementation focused on reliability, predictability and friendliness, developed by a community of people from around the world. We have long-term plans to incrementally evolve Nix to work in more places, to make it more reliable and secure, and to update the language and semantics to correct past mistakes and reduce errors, all the while providing an amazing tooling experience.

Upgrading from CppNix or previous Lix versions

The upgrade procedure depends on how you installed Lix or CppNix, and is fully described in the Lix installation guide .

If you are using Lix from nixpkgs on NixOS, you just need to upgrade your nixpkgs once the upgrade pull request has passed through the build farm into your channel; no other action is required.

If you want to help us test the next version of Lix, consider running main by following the beta guide.

Changes

Lix 2.94 builds on the work from Lix 2.93 in improving the daemon and language to make room for future evolution.

This release took longer than usual due to non-trivial fallout from the CVE mitigation, plus the team taking some time off.

Here are the highlights from the release notes. This is not a comprehensive list, and we are thankful for every contributor’s hard work in making this release happen.

News from RPC

As mentioned in previous communications, Lix pursues the goal of delivering a reasonable RPC protocol that replaces the bespoke and obsolete Nix daemon protocol.

Building on top of KJ was chosen because it provides access to Cap’n Proto and gives us a well-tested RPC substrate.

Lix 2.94.0 now ships Cap’n Proto protocol definitions for build hooks and for logging.

Build hooks are used during remote builds. When Lix performs a remote build ( nix __build-remote ), it spawns a hook program. This hook instance is a Cap’n Proto RPC server that speaks the new protocol.

This subsystem has been the first target of the ongoing RPC work.

These changes will be mostly invisible to users. The main visible improvement is that multiple build hook processes may now wait concurrently. In the old protocol only one could wait at a time.

Flakes enters freeze period

As announced in https://wiki.lix.systems/books/lix-contributors/page/flakes-feature-freeze , Flakes-related changes are now frozen unless a core team member grants an explicit exception (mostly for bugfixes).

The Lix project has received many Flakes-related changes in the past, often driven by the CppNix project. The quality of these changes did not match the usual Lix standards and forced the core team to spend considerable effort evaluating their interactions with the existing feature set. Several inconsistency issues slipped through review. This is unsurprising because Flakes remain an experimental feature with semantics that change in practice.

Now that there are at least three separate implementations of Flakes, the Lix project cannot reasonably maintain a third flavor inside core.

The Flakes implementation in the Lix codebase has also been a recurring source of maintenance headaches.

We intend to remove Flakes from the core entirely and ship them as a plugin that is included by default.

Future Flakes improvements can then happen in that subproject without affecting Lix core.

Extracting Flakes is a 2.95.0 objective.

If you are confident with C++, please consider helping us with this migration.

Breaking changes

A significant amount of technical debt has been cleared to allow safer evolution of Lix.

Language

  • Lix strings may now contain NUL bytes

    Lix can now manipulate strings that contain arbitrary binary data, including NUL bytes. The previous behavior was inconsistent and unintentional. Examples in the release notes show where this caused incorrect behavior.

    Many thanks to eldritch horrors for this.

  • Function equality semantics are more consistent, but still bad

    Functions always compare as not equal in Nixlang except when they come from the same memory location. This optimization exists to speed up comparisons of large attribute sets and had to be extended to functions stored inside attribute sets.

    While reworking the evaluator, Lix made this behavior more consistent, although still undesirable.

    For example: let s.f = f; in s.f == s.f now evaluates to true .

    Lix intends to remove this optimization later.

    Function equality is undefined behavior and should not be relied upon in portable Nixlang code.
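
    A small sketch of the behavior, with the results we expect under these pointer-identity semantics:

    # same memory location, so the comparison short-circuits to true
    $ nix-instantiate --eval --expr 'let f = x: x; s = { inherit f; }; in s.f == s.f'
    true

    # two distinct closures, so the comparison is false
    $ nix-instantiate --eval --expr '(x: x) == (x: x)'
    false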

    Many thanks to eldritch horrors for this.

Stores and builds

  • Remove support for daemon protocols before 2.18

    This affects clients connecting to the local daemon socket or remote builders configured using the ssh-ng protocol. Builders using the ssh protocol are still supported for older clients such as Nix 2.3.

    Maintaining these older protocols required too much effort and lacked test coverage.
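
    As a rough illustration of the two transports (host names are hypothetical): ssh drives nix-store --serve on the remote side, while ssh-ng speaks the daemon protocol and therefore now needs a sufficiently recent peer.

    # still works against older remotes such as Nix 2.3
    $ nix store ping --store ssh://legacy-builder.example.com
    # requires a 2.18-era daemon protocol or newer on the remote side
    $ nix store ping --store ssh-ng://modern-builder.example.com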

    Many thanks to eldritch horrors for this.

  • Remove impure derivations and dynamic derivations

    The impure-derivations and dynamic-derivations experimental features are removed.

    New impure or dynamic derivations can no longer be created, and existing ones cannot be read or built. Their outputs remain valid until garbage collected; the existing .drv files can only be garbage collected.

    Many thanks to eldritch horrors for this.

  • A new cgroup delegation model for the cgroups experimental feature

    Builds using cgroups ( use-cgroups = true and experimental-features = cgroups ) now always receive a delegated cgroup tree with permission to manage controllers in that subtree.

    This can cause visible breakage because the build process (daemon or direct store access) must now run inside a cgroup tree that was already delegated by the caller, for example by the service manager or the system administrator.

    The uid-range experimental feature now depends on cgroups .

    The release notes contain guidance on setting up the tree and working around issues if you get stuck.
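
    As a minimal sketch of the delegation setup for a systemd-managed daemon (the drop-in path is illustrative; the release notes remain authoritative):

    # /etc/systemd/system/nix-daemon.service.d/delegate.conf
    [Service]
    Delegate=yes

    # and in nix.conf:
    #   experimental-features = cgroups
    #   use-cgroups = true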

    Many thanks to Raito Bezarius , eldritch horrors and lheckemann for this.

  • Fixed output derivations now run using pasta network isolation

    Following the mitigation of CVE-2025-46416, fixed-output derivations are now isolated from the host network using Pasta.

    This is a breaking change. We learned a number of operational details while deploying Pasta at scale .

    If your DNS setup is healthy (first server in /etc/resolv.conf responds quickly) and the derivation only needs TCP or UDP, this change should not affect you.

    Many thanks to eldritch horrors and puck .

  • Enable zstd with a high compression level instead of xz for binary cache uploads

    Binary cache uploads now use zstd instead of xz. This significantly improves upload time on modern systems and high-speed links, making it possible to saturate a gigabit link when uploading to fast S3 implementations such as Garage.

    The release notes contain a typo: reducing runtime from 77 seconds to 18 seconds is about a 75 % improvement, not 50 %.

    On a 4.4GB NAR file, uploads can be 75 % faster at the cost of roughly 18 % larger output.
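
    If you run your own cache, the codec and level can also be tuned per store URI; a sketch with an illustrative bucket name and level:

    $ nix copy --to 's3://my-cache?compression=zstd&compression-level=15' ./result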

    Many thanks to eldritch horrors and Raito Bezarius .

Features

  • HTTP/3

    Lix can now fetch from binary caches using HTTP/3 when supported by both the server and the local curl stack. This can reduce latency, although throughput may vary as described in https://daniel.haxx.se/blog/2024/06/10/http-3-in-curl-mid-2024/ .

    HTTP/3 is off by default.

    Many thanks to Raito Bezarius and eldritch horrors .

  • Experimental integer coercion

    Lix adds an experimental feature that allows integers to be coerced where strings were previously required. This reduces boilerplate but changes language semantics, so it is off by default.
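
    A sketch of what the feature permits once enabled (the exact feature name to enable is listed in the release notes):

    # today, integers must be converted explicitly:
    $ nix-instantiate --eval --expr '"port ${toString 8080}"'
    "port 8080"

    # with the experimental feature enabled the coercion becomes implicit;
    # without it, this fails with "cannot coerce an integer to a string"
    $ nix-instantiate --eval --expr '"port ${8080}"'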

    Many thanks to Raito Bezarius , eldritch horrors , delroth and winter .

  • nix-eval-jobs no-instantiate

    nix-eval-jobs now supports --no-instantiate , skipping derivation instantiation and improving performance of large evaluations.
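
    Usage sketch (the flake attribute and worker count are illustrative):

    $ nix-eval-jobs --flake '.#hydraJobs' --workers 4 --no-instantiate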

    Many thanks to Ma27 and mic92 .

  • Hyperlinks in attribute set printing

    The attribute set printer used by nix repl and in type errors now prints hyperlinks on attribute names to their definition sites when known.

    Example:

    $ nix eval -f '<nixpkgs>' lib.licenses.mit
    { deprecated = false; free = true; fullName = "MIT License"; redistributable = true; shortName = "mit"; spdxId = "MIT"; url = "https://spdx.org/licenses/MIT.html"; }
    

    Many thanks to jade for this.

Improvements

Debugging

  • Ctrl-C rework

    Interrupt handling has been improved so Ctrl-C behaves predictably across long evaluations and daemon interactions.

    One Ctrl-C requests a graceful shutdown. A second Ctrl-C aborts immediately with no guarantee of data integrity.

    ❯ nix-instantiate --eval --expr 'let f = n: if n == 0 then 0 else f (n - 1) + f (n - 1); in f 32'
     ^CStill shutting down. Press ^C again to abort all operations immediately.
     ^C
    
     ❌130 ❯
    

    Many thanks to eldritch horrors .

  • Stack traces now summarize involved derivations at the bottom

    Evaluation stack traces now end with a summary of the derivations involved in the error, which helps identify which package triggered the failure, for example when a dependency trips an assertion such as being marked unsupported, insecure or broken.

    error:
         … while calling the 'head' builtin
           at /nix/store/9v6qa656sq3xc58vkxslqy646p0ajj61-source/lib/attrsets.nix:1701:13:
           1700|           if length values == 1 || pred here (elemAt values 1) (head values) then
           1701|             head values
               |             ^
           1702|           else
    
         … while evaluating the attribute 'value'
           at /nix/store/9v6qa656sq3xc58vkxslqy646p0ajj61-source/lib/modules.nix:1118:7:
           1117|     // {
           1118|       value = addErrorContext "while evaluating the option `${showOption loc}':" value;
               |       ^
           1119|       inherit (res.defsFinal') highestPrio;
    
         (stack trace truncated; use '--show-trace' to show the full trace)
    
         error: Package ‘olm-3.2.16’ in /nix/store/9v6qa656sq3xc58vkxslqy646p0ajj61-source/pkgs/by-name/ol/olm/package.nix:37 is marked as insecure, refusing to evaluate.
    
    
         < -snip the whole explanation about olm's CVEs- >
    
    
         note: trace involved the following derivations:
         derivation 'etc'
         derivation 'dbus-1'
         derivation 'system-path'
         derivation 'nheko-0.12.1'
         derivation 'mtxclient-0.10.1'
    

    Many thanks to Qyriad .

Debugging builds

  • --keep-failed chown the build dir to the invoking user

    When using --keep-failed or keep-failed = true , Lix now reliably changes ownership of the failed build directory to the user who requested the build, including through the daemon.
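
    For example (the attribute name is illustrative):

    $ nix-build '<nixpkgs>' -A somePackage --keep-failed
    # on failure, the kept build directory (under /nix/var/nix/builds by default)
    # now belongs to you, even when the build went through the daemon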

    Many thanks to eldritch horrors .

  • Keeping mismatching fixed-output derivation artifacts

    When fixed-output derivations fail because the produced output does not match the expected hash, both paths are printed. The offending output is added to the store so that you can inspect it, compute a new hash, or fetch a known-good output for comparison.

    Many thanks to lheckemann .

  • Show tree with references that lead to an output cycle

    Output cycles now include a reference tree showing exactly how the cycle arose.

    Example:

    error: cycle detected in build of '/nix/store/gc5h2whz3rylpf34n99nswvqgkjkigmy-demo.drv' in the references of output 'bar' from output 'foo'.
    
         Shown below are the files inside the outputs leading to the cycle:
         /nix/store/3lrgm74j85nzpnkz127rkwbx3fz5320q-demo-bar
         └───lib/libfoo: …stuffbefore /nix/store/h680k7k53rjl9p15g6h7kpym33250w0y-demo-baz andafter…
             → /nix/store/h680k7k53rjl9p15g6h7kpym33250w0y-demo-baz
             └───share/snenskek: …???? /nix/store/dm24c76p9y2mrvmwgpmi64rryw6x5qmm-demo-foo …
                 → /nix/store/dm24c76p9y2mrvmwgpmi64rryw6x5qmm-demo-foo
                 └───bin/alarm: …texttexttext/nix/store/3lrgm74j85nzpnkz127rkwbx3fz5320q-demo-bar abcabcabc…
                     → /nix/store/3lrgm74j85nzpnkz127rkwbx3fz5320q-demo-bar
    

    Many thanks to Ma27 .

  • disallowedRequisites now reports chains of disallowed requisites

    Errors now include the full chain of references leading to each forbidden path rather than only the immediate offender.

    Example:

    $ nix-build -A hello
    error: output '/nix/store/0b7k85gg5r28gb54px9nq7iv5986mns9-hello-2.12.2' is not allowed to refer to the following paths:
        /nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-glibc-2.40-66
        Shown below are chains that lead to the forbidden path(s).
        /nix/store/0b7k85gg5r28gb54px9nq7iv5986mns9-hello-2.12.2
        └───/nix/store/eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee-glibc-2.40-66
    

    Many thanks to Ma27 and Robert Hensing .

Performance

  • Substituter query speed

    Substituter queries now take advantage of asynchronous work introduced in 2.93. Caches with higher latency benefit more from these improvements.

    Even so, we measured roughly a 60 % reduction in query time for medium-sized closures such as NixOS systems.

    Many thanks to eldritch horrors .

  • Rate limiting nix copy parallel operations

    nix copy previously hit “Too many open files” errors on main. We added a rate limit to avoid that. You can relax it by raising the open-files limit with ulimit -n <new number>.
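
    If you still hit the ceiling, raising the per-process limit in the invoking shell is enough; the value and destination are illustrative:

    $ ulimit -n 65536
    $ nix copy --to 'ssh-ng://cache.example.com' ./result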

    Many thanks to Raito Bezarius .

  • Pointer tagging, thunk state sharing and unreferenced Values

    Lix implemented pointer tagging and reduced the Value structure to a single machine word. Thunk state sharing was implemented, enabling more reuse of Value objects. Value is now used as a reference-counted smart pointer to a heap object.

    This unblocks further optimizations and resulted in:

    • 15 % memory savings and a 3 % evaluation time regression on system rebuild
    • 17 % memory savings and a 7 % evaluation time improvement on nix search

    Many thanks to eldritch horrors .

  • Reuse of special strings in the language

    Common strings that occur during evaluation, such as the results of builtins.attrNames, are now reused more efficiently, reducing allocations and slightly improving evaluation speed.

    Up to 11 % memory savings were observed in large NixOS deployments with a slight decrease in CPU usage.

    Many thanks to eldritch horrors , xokdvium , Raito Bezarius , Tom Hubrecht and NaN-git for this.

  • Parallel marking in garbage collection

    The Boehm-Demers-Weiser garbage collector supports parallel marking. Lix now enables this feature.

    Contrary to the release notes, we did not observe a 38 % improvement on nix search.

    Many thanks to Eelco Dolstra and Seth Flynn for this.

Fixes

  • build-dir no longer defaults to temp-dir

    Temporary build directories no longer default to temp-dir (typically /tmp ), fixing CVE-2025-46415.

    Many users use a tmpfs for /tmp . The default build directory is now /nix/var/nix/builds . If you care about tmpfs semantics, bind-mount that directory onto a tmpfs.
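
    For example, the simplest variant is mounting a tmpfs directly on the new default location (the size is illustrative):

    $ sudo mount -t tmpfs -o size=32G tmpfs /nix/var/nix/builds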

    Compared to 2.93.3, additional changes were made so Darwin handles the new path length correctly, which allows reasonable derivations to connect to UNIX domain sockets in sandboxes. This will also be shipped in 2.93.4.

    Many thanks to eldritch horrors , Emily and Qyriad for this.

  • Global CA are copied inside the builder environment

    Global CA bundles are now correctly propagated to the builder environment. This fixes TLS verification issues for fetchers relying on system CAs.

    This adds about 500 KB of scratch space per fixed-output build.

    Many thanks to Raito Bezarius and Emily for this.

  • Exponential backoff for downloads

    In 2.93, we lowered the connect-timeout to 5 seconds. Some users have DNS setups where the first nameserver times out, causing resolution to exceed 5 seconds.

    We replaced linear backoff with exponential backoff to handle these cases more robustly.

    Many thanks to Ma27 for this.

  • nix develop for derivations that reject dependencies with structured attributes

    Recent work in nixpkgs for bash-less systems added many outputChecks.<output>.disallowedRequisites definitions to packages such as systemd.

    This caused nix develop nixpkgs#systemd to stop working, which is a common workflow for systemd development.

    The root cause was that nix develop only understood non-structured output checks.

    Lix 2.94.0 adds support for structured output checks, so nix develop nixpkgs#systemd works again.

    nix-shell is unaffected by this problem.

    Many thanks to Raito Bezarius for this.

  • “My shell didn’t work” — nix-shell default shell directory is not /tmp anymore

    Historically, nix-shell stored internal shell files in $TMPDIR or /tmp and also used it for $NIX_BUILD_TOP . Many users have $TMPDIR unset, so /tmp was consistently used.

    If you ran sudo nix-shell and exited uncleanly, you could create /tmp/env-vars with root permissions, causing all subsequent shells for unprivileged users to fail silently. The workaround was to delete the file manually.

    Lix now creates a dedicated temporary directory for shell metadata that does not collide with other shells. Cleanup is handled by Lix itself after the shell exits.

    Many thanks to Raito Bezarius for this.

  • Remove reliance on bash for remote stores via SSH

    Lix 2.93 changed SSH remote store handling in a way that broke classical ForceCommand and similar directives. We reverted the problematic part in 2.93.1 and carry the same fix here.

    Previous configurations work again out of the box.

    Many thanks to Raito Bezarius for this.

You can read the full changelog in the manual .

Known issues

The release notes may contain imprecisions and typos; we are working to correct these without doing a point release.

No impactful issues are known yet!

Credits

Thanks, as always, to the following groups:

  • The large community who beta tested the upcoming release by running main in production since the 2.94 branch-off. We really appreciate the immediate feedback on our work, and the trust involved in running main alongside us means a lot. We know we tested the patience of some of you, and we thank you for it.

    If you want to run Lix main yourself, see the beta guide for details.

  • Everyone who contributed by filing bugs and giving us feedback on Matrix.

  • All the first time contributors who made their first contributions to a Nix implementation in Lix. We are eternally grateful to everyone who helped us out on the numerous important but tedious issues.

  • All the contributors who have helped us with the backlog of bugs.

  • The CppNix contributors and CppNix team, without whom we would not have this software, and who wrote some of the improvements ported into this release.

A quiet but heartfelt note of gratitude goes to eldritch horrors for their steady guidance throughout this release, even in the face of its many challenges.

Onwards and upwards for the next release. We look forward to continuing to work together with everyone to build a better foundation for the evolution of Nix.

Tycoon 2FA and the Collapse of Legacy MFA

Bleeping Computer
www.bleepingcomputer.com
2025-11-18 15:01:11
Tycoon 2FA enables turnkey real-time MFA relays behind 64,000+ attacks this year, proving legacy MFA collapses the moment a phishing kit targets it. Learn from Token Ring how biometric, phishing-proof FIDO2 hardware blocks these relay attacks before they succeed. [...]...
Original Article

Phishing conveyor belt

The rise of the Tycoon 2FA phishing kit should serve as a global warning siren for every enterprise. This is not a tool for elite hackers. This is a turnkey kit that anyone with a browser can use to bypass the very MFA and auth apps companies depend on. And it is being used at scale.

Over 64,000 attacks have already been tracked this year, many targeting Microsoft 365 and Gmail because those platforms represent the easiest, fastest path into an enterprise.

Phishing as a Service, No Skill Required

Tycoon 2FA’s power comes from removing the need for technical skill. It is Phishing as a Service, fully packaged, polished, and automated. A teenager who cannot write a line of code can deploy it. The kit walks the operator through setup. It provides fake login pages. It spins up reverse proxy servers.

It does all the heavy lifting. The attacker simply sends a link to hundreds of your employees and waits for one to bite.

Real-Time MFA Relay and Total Session Takeover

Once the victim clicks, Tycoon 2FA does the rest. It intercepts usernames and passwords in real time. It captures session cookies. It proxies the MFA flow directly to Microsoft or Google. The victim thinks they are simply passing a security check, but they are authenticating the attacker.

This is the terrifying part. Even well-trained users fall for this because everything looks identical, down to the pixel. The pages are dynamic, pulling live responses from legitimate servers.

If Microsoft says enter your code, the page updates instantly. If Google sends a prompt, it appears exactly as expected. There is no visible difference. There is no clue. And there is no way for any legacy MFA or authenticator app to stop it because Tycoon is man in the middle by design.

Built to Evade Detection

It gets worse. Tycoon 2FA includes anti-detection layers that rival commercial malware strains. Base64 encoding. LZ string compression. DOM vanishing. CryptoJS obfuscation. Automated bot filtering. CAPTCHA challenges. Debugger checks.

The kit hides itself from scanners and researchers. It only reveals its true behavior when a human target arrives. And once it completes the authentication relay, the attacker gets full session access inside Microsoft 365 or Gmail.

From there they move laterally into SharePoint, OneDrive, email, Teams, HR systems, finance systems. One successful phish creates total compromise.

CISO Guide: Stopping Ransomware with Next-Gen MFA

The ebook “CISO Guide: Stopping Ransomware with Next-Gen MFA” explores how ransomware attacks are evolving and why legacy MFA can’t keep up.

This essential guide reveals the real-world impact of phishing-resistant MFA, how it stops ransomware before damage is done, and why CISOs are making the switch to biometric phishing proof identity.

Read the CISO Guide

Legacy MFA Has Already Collapsed

This is why legacy MFA has collapsed. Just rolling it out makes your company a honeypot. SMS codes. Push notifications. TOTP apps. All share the same flaw. They rely on user behavior. They depend on the hope that a user notices something is wrong.

They offer attackers shared secrets that can be intercepted, forwarded, or replayed. Tycoon 2FA and dozens of similar kits exploit exactly that. They turn the user into the attack vector. Even passkeys are proving vulnerable when synced through cloud accounts or when fallback recovery paths exist that can be socially engineered.

Attackers understand this completely. Criminal groups like Scattered Spider, Octo Tempest, and Storm 1167 are using these kits daily. It is the fastest growing attack method in the world because it is easy, scalable, and requires no technical sophistication.

Companies are rolling out MFA and authenticator apps only to find out these systems collapse the moment a phishing kit decides to target them. The truth is simple. If someone can trick your employee into entering a code or approving a prompt, the attacker wins. And Tycoon does exactly that.

The Path Forward: Phishing-Proof MFA

But there is a path forward and it is fast and easy to roll out. Biometric phishing proof identity built on FIDO2 hardware. Authentication that is proximity based, domain bound, and impossible to relay or spoof. A system where there are no codes to enter, no prompts to approve, no shared secrets to intercept, and no way to trick the user into helping the attacker.

A system that rejects fake websites automatically. A system that forces a live biometric fingerprint match on a physical device that must be near the computer being logged into.

This changes everything because it removes the user from the decision tree. Instead of hoping someone recognizes a fake login page, the authenticator itself checks the origin cryptographically.

Instead of hoping someone refuses a malicious push request, the authenticator never receives a push request at all. Instead of asking people to be perfect, the system verifies identity with hardware, not judgment.

The Token Model

This is the model behind Token Ring and Token BioStick . Phishing proof by architecture. Biometric by requirement. Proximity based by default. Domain bound by cryptography.

There is no code to steal. There is no approval to trick. There is no recovery flow for a scammer to exploit. Even if a user clicks the wrong link. Even if a user hands over a password (if they even have one). Even if a social engineer calls pretending to be IT. The authentication simply fails because the domain does not match and the fingerprint is not present.

Tycoon 2FA hits a wall. The relay breaks. The attack dies instantly. And these solutions are inexpensive and available today.

Enterprises using these devices report something important. Employees comply easily with this passwordless wireless solution. Authentication is fast (2 seconds). There is nothing to remember. Nothing to type. Nothing to approve. It is a better user experience and a vastly stronger security posture.

When identity is bound to a physical biometric device that enforces origin checks and proximity requirements, phishing kits become irrelevant.

The Reality Every Enterprise Must Face

This is the moment every enterprise must accept. The attackers have evolved and the defenses must evolve too. Legacy MFA cannot survive this threat. Authenticator apps cannot survive this threat. Passkeys struggle under it. Tycoon 2FA proves that any system asking users to enter or approve anything can be defeated in seconds.

Here is the truth in plain language. If your MFA can be fooled by a fake website, it is already compromised. If your authentication can be relayed, it will be. If your system depends on user judgment, it will fail. Biometric hardware based identity that is phishing proof, proximity bound, and domain locked is the only way forward.

The criminals have upgraded. Now it is your turn. Upgrade your identity layer before Tycoon or its successors make you the next headline.

Token products are now available online: https://store.tokenring.com

Sponsored and written by Token .

Show HN: Browser-based interactive 3D Three-Body problem simulator

Hacker News
trisolarchaos.com
2025-11-18 15:00:36
Comments...
Original Article

About the N-Body Simulator

What is the Three-Body Problem?

The three-body problem is one of the most famous challenges in classical physics and celestial mechanics. It asks: given the initial positions, masses, and velocities of three bodies in space, can we predict their future motion under mutual gravitational attraction?

Unlike the two-body problem (which has an exact analytical solution), the three-body problem has no general closed-form solution . This makes numerical simulation the primary tool for studying these complex gravitational systems.

N-Body Gravitational Simulation

This simulator uses Newton's law of universal gravitation to model the gravitational forces between every pair of bodies:

F = G × m₁ × m₂ / (r² + ε²)

Each body experiences the sum of all pairwise gravitational forces from every other body. For N bodies, this requires calculating N(N-1)/2 force pairs each timestep. The ε² term is a softening parameter that prevents numerical singularities when bodies pass very close together.

The simulation supports multiple integration methods. By default, it uses the Velocity Verlet integration method , a symplectic integrator that provides superior energy conservation compared to simpler methods like Euler integration. This makes it ideal for long-term orbital mechanics simulations.
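
For reference, the per-timestep Velocity Verlet update follows the standard formulation (the simulator's implementation may differ in minor details):

x(t + Δt) = x(t) + v(t)·Δt + ½·a(t)·Δt²
v(t + Δt) = v(t) + ½·[a(t) + a(t + Δt)]·Δt

where a is the acceleration obtained from the softened force law above; only one new force evaluation per body is needed each step, since a(t) can be reused from the previous step.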

Users can switch to the 4th-order Runge-Kutta (RK4) method in the Advanced Settings, which offers higher accuracy per timestep and typically shows lower energy drift in short simulations. However, RK4 is not symplectic and accumulates systematic phase errors over long simulation times, causing orbits to gradually decay or expand. This makes RK4 better suited for short to medium duration simulations where minimizing instantaneous error is the priority, while Verlet excels at maintaining correct orbital shapes over extended periods.

Preset Configurations

The simulator includes several famous periodic three-body orbits discovered through numerical searches:

N-Body Presets Showcase

2D Orbits

  • Figure-8 choreography: Discovered by Cris Moore in 1993, where three equal masses chase each other along a figure-eight shaped path
  • Lagrange triangular configuration: Equilateral triangle configuration with circular orbits.
  • Butterfly, Broucke, Hénon, and Yarn: Periodic orbits from the Šuvakov-Dmitrašinović database of three-body choreographies, discovered through systematic numerical exploration of initial conditions

3D Orbits

Three-dimensional periodic orbits from Li and Liao (2025), who report 10,059 new periodic solutions, including 21 choreographic orbits and 273 "piano-trio" orbits (where two equal-mass bodies share one orbit while a third body follows another). Paper | GitHub

Features & Applications

  • Real-time Physics: Experience gravitational dynamics in 3D with interactive controls
  • Multiple Integration Methods: Choose between Velocity Verlet (energy-conserving) and RK4 (high accuracy).
  • Exploration Platform: Experiment with different initial conditions and masses
  • Timeline Playback: Scrub through simulation history to analyze orbital behavior

How to Use

Getting Started: Use the preset configurations (Figure-8 or Lagrange) to see stable three-body orbits, or generate random initial conditions to explore chaotic dynamics.

Controls: Adjust body masses, simulation speed, and physics parameters. Use the timeline to review and analyze orbital patterns. Drag bodies while paused to create custom configurations.

Sharing: Click "Share Configuration" to generate a URL that preserves your exact simulation initial state.

Energy Conservation & Simulation Accuracy

The simulator displays two important energy metrics in the Advanced Settings panel:

  • Total Energy: The sum of kinetic energy (½mv²) and gravitational potential energy (-Gm₁m₂/r) of all bodies. In an ideal gravitational system, this value should remain constant over time.
  • Energy Drift: The percentage change in total energy from the initial state. This measures the numerical accuracy of the simulation.
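
One natural way to compute the drift (an assumption; the simulator may define it slightly differently):

drift(t) = |E(t) − E(0)| / |E(0)| × 100 %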

In real physics, energy is conserved in isolated systems. However, numerical integration methods introduce small errors at each timestep. The energy drift indicator helps you evaluate simulation quality:

  • Green (<1%): Excellent energy conservation - the simulation is highly accurate
  • Yellow (1-5%): Moderate drift - acceptable for most purposes but consider reducing timestep
  • Red (>5%): Significant drift - simulation may be unreliable, reduce timestep or try other integration methods

The Velocity Verlet integration method is "symplectic," meaning it preserves the phase-space structure of Hamiltonian systems. While RK4 typically shows lower energy drift in short-term simulations (better local accuracy), Verlet prevents systematic phase errors that accumulate over extended simulations. This makes Verlet ideal for long-term orbital mechanics where maintaining orbital stability over thousands of periods is more important than minimizing instantaneous error.

Why is Total Energy Negative? In gravitational systems, total energy is often negative, and this is perfectly normal! Gravitational potential energy is defined as zero at infinite separation and becomes increasingly negative as bodies move closer together (PE = -Gm₁m₂/r). When total energy is negative, it means the system is gravitationally bound - the bodies don't have enough kinetic energy to escape to infinity, so they remain in orbit. This is exactly what you see in stable orbital systems like planets around stars or the choreographed orbits in this simulator. A negative total energy that remains constant indicates a stable, bound orbital system.

Technical Details

Built with Three.js for WebGL-accelerated 3D graphics and modern JavaScript. The physics engine implements N-body gravitational calculations with a configurable softening parameter to prevent numerical singularities during close encounters.

The simulator tracks up to 10,000 frames of history, allowing you to review and analyze the evolution of complex orbital systems. All simulations are deterministic and reproducible.

Feedback

Have suggestions, found a bug, or want to share your thoughts? Give feedback and help improve this simulator!

Two Weeks of Surveillance Footage From ICE Detention Center ‘Irretrievably Destroyed’

403 Media
www.404media.co
2025-11-18 14:58:52
"Defendants have indicated that some video between October 19, 2025 and October 31, 2025 has been irretrievably destroyed and therefore cannot be produced on an expedited basis or at all."...
Original Article

The Department of Homeland Security claimed in court proceedings that nearly two weeks worth of surveillance footage from ICE’s Broadview Detention Center in suburban Chicago has been “irretrievably destroyed” and may not be able to be recovered, according to court records reviewed by 404 Media .

The filing was made as part of a class action lawsuit against the Department of Homeland Security by people being held at Broadview, which has become the site of widespread protests against ICE. The lawsuit says that people detained at the facility are being held in abhorrent, “inhumane” conditions. The complaint describes a facility where detainees are “confined at Broadview inside overcrowded holding cells containing dozens of people at a time. People are forced to attempt to sleep for days or sometimes weeks on plastic chairs or on the filthy concrete floor. They are denied sufficient food and water […] the temperatures are extreme and uncomfortable […] the physical conditions are filthy, with poor sanitation, clogged toilets, and blood, human fluids, and insects in the sinks and the floor […] federal officers who patrol Broadview under Defendants’ authority are abusive and cruel. Putative class members are routinely degraded, mistreated, and humiliated by these officers.”

As part of discovery in the case, the plaintiffs’ lawyers requested surveillance footage from the facility starting from mid September, which is when ICE stepped up its mass deportation campaign in Chicago. In a status report submitted by lawyers from both the plaintiffs and the Department of Homeland Security, lawyers said that nearly two weeks of footage has been “irretrievably destroyed.”

“Defendants have agreed to produce. Video from September 28, 2025 to October 19, 2025, and also from October 31, 2025 to November, 7 2025,” the filing states. “Defendants have indicated that some video between October 19, 2025 and October 31, 2025 has been irretrievably destroyed and therefore cannot be produced on an expedited basis or at all.” Law & Crime first reported on the filing .

1. Surveillance Video from Inside Broadview. In their Expedited Discovery Request No. 9, Plaintiffs request surveillance video from inside the Broadview facility captured by Defendants’ equipment for a limited set of days, starting in mid-September 2025. Plaintiffs also request current video on a weekly basis. Defendants have agreed to produce video from September 28, 2025, to October 19, 2025, and also from October 31, 2025, to November 7, 2025. Plaintiffs are providing Defendants with hard drives for this production, and the parties expect that this initial production will be made shortly. The parties are discussing ways to ease the burden of production of video going forward, including by having Plaintiffs select random days for production rather than the production of all video on an on-going basis. Defendants have indicated that some video between October 19, 2025, and October 31, 2025, has been irretrievably destroyed and therefore cannot be produced on an expedited basis or at all. Plaintiffs are in the process of hiring an IT contractor. Plaintiffs’ contractor will meet with the government’s ESI Liaison (with attorneys on the phone) to attempt to work through issues concerning the missing video, including whether any content is able to be retrieved. While Plaintiffs intend to explore the issue of missing footage, Plaintiffs have communicated to
A screenshot from the court filing

The filing adds that the plaintiffs, who are being represented by lawyers from the American Civil Liberties Union of Illinois, the MacArthur Justice Center, and the Eimer Stahl law firm, hired an IT contractor to work with the government “to attempt to work through issues concerning the missing video, including whether any content is able to be retrieved.”

Surveillance footage from inside the detention center would presumably be critical in a case about the alleged abusive treatment of detainees and inhumane living conditions. The filing states that the plaintiffs' attorneys have “communicated to Defendants that they are most concerned with obtaining the available surveillance videos as quickly as possible.”

ICE did not respond to a request for comment from 404 Media. A spokesperson for the ACLU of Illinois told 404 Media “we don’t have any insight on this. Hoping DHS can explain.”

About the author

Jason is a cofounder of 404 Media. He was previously the editor-in-chief of Motherboard. He loves the Freedom of Information Act and surfing.

Jason Koebler

[$] Pouring packages with Homebrew

Linux Weekly News
lwn.net
2025-11-18 14:40:55
The Homebrew project is an open-source package-management system that comes with a repository of useful packages for Linux and macOS. Even though Linux distributions have their own package management and repositories, Homebrew is often used to obtain software that is not available in a distribution'...
Original Article

This article is currently available to LWN subscribers only; it will become freely available on December 4, 2025.