Standards for a Responsible AI Future: Reflections on the Seoul Statement

Internet Exchange
internet.exchangepoint.tech
2025-12-11 18:02:32
The statement comes at a time when principles of justice, dignity, and human rights are increasingly politicized, questioned, or treated as negotiable....
Original Article

Photo by Nicole Avagliano / Unsplash

By Jacobo Castellanos, Coordinator of the Technology, Threats, and Opportunities team at WITNESS

On December 2, the ITU, ISO, and IEC issued their Seoul Statement: a vision for AI standards that account for global contexts, rights impacts, and real-world harms.

The Seoul Statement includes four core commitments:

  • Integrating socio-technical perspectives into standards: ensuring AI standards address not just algorithms and data, but real-world impacts on people, societies, and the environment.
  • Embedding human rights and universal values: protecting dignity, privacy, fairness, and non-discrimination throughout AI design and governance.
  • Building an inclusive, multi-stakeholder community: enabling governments, industry, researchers, and civil society to shape global AI norms together.
  • Strengthening public–private collaboration and capacity-building: reducing global inequalities so all countries and communities can meaningfully benefit from AI.

This vision is not only welcome; it is a meaningful signal of hope.

It comes at a time when principles of justice, dignity, and human rights—once a shared reference point for international cooperation and for civil society’s engagement with governments and companies—are increasingly politicized, questioned, or treated as negotiable.

Why this matters

Standards, like regulation, form the structural base of the AI stack. By committing to explicitly integrate human rights and real-world impact into standards development, the ITU, ISO, and IEC can help steer AI toward protecting human rights, strengthening the information ecosystem, and fostering responsible innovation.

Human rights and civil society groups have been calling for this shift for years (see, for example, OHCHR’s latest report). Standards alone won’t solve every AI concern, but together with regulation and tooling they can create a pathway toward rights protections and limits on misuse. At WITNESS, we work at the intersection of technology and human rights, and we have seen this firsthand in our work with the Coalition for Content Provenance and Authenticity (C2PA), where a harm assessment continues to shape both the design of the standard and the ecosystem around it. By developing Content Credentials, a form of tamper-evident metadata that travels with an image, video, or audio file to show when, where, and how it was created or modified, C2PA offers a practical example of how standards can embed rights considerations from the ground up.

From Promise to Practice

While this vision is promising, a pressing question remains: How will these commitments be translated into action?

The Seoul Statement was presented during a two-day summit held in Seoul, but concrete plans for its implementation were not shared. Representatives from the ITU, ISO, and IEC did not publicly outline how this vision would be realized, and no details were provided regarding budgets, mechanisms, timelines, or accountability measures.

Standards work is inherently slow and resource-intensive. Incorporating socio-technical and human rights considerations adds another layer of complexity that requires significant investment in expertise, time and financial support. Without such investment, the Seoul Statement risks becoming a symbolic gesture rather than a meaningful turning point.

A notable concern was the limited presence of civil society at the Seoul summit. Multi-stakeholder participation was frequently mentioned, yet only a few human rights groups attended. Government and industry voices were far more visible, which is too narrow a basis for defining future AI norms. For the vision of these standards development organizations (SDOs) to carry real weight, civil society must be involved consistently, adequately resourced, and included from the beginning, not added as an afterthought.

A Call to Stay Engaged

Still, there is reason for cautious optimism. The Seoul Statement represents an important first step, formally issued by institutions that will play a fundamental role in shaping the future of AI. By acknowledging that AI standards cannot be “just technical” and must be grounded in human rights and societal wellbeing, it creates a platform to push for meaningful change.

At WITNESS, we will continue to be actively involved in the C2PA, where we co-chair its Threats and Harms Task Force, and we will engage with the World Standards Cooperation’s AI and Multimedia Authenticity Standards Collaboration (ITU, IEC, and ISO) as it positions AI standards as a powerful tool for regulation development and enforcement.

We call on civil society, researchers, regulators and funders to remain engaged, not only when milestones are announced, but through the long, technical, often opaque process of drafting, reviewing and implementing standards. We must also hold the ITU, ISO, and IEC accountable to their own vision, while working to extend this commitment to other national and international SDOs, and to the remaining building blocks that sit atop the foundations of regulation and standards in the AI ecosystem.


You can support independent bookstores and get great deals without lining the pockets of billionaires. Help support Internet Exchange by shopping our bookshop The Stack on Bookshop.org. The Stack curates books that connect the dots between code and culture, protocol and protest, theory and practice.


Support the Internet Exchange

If you find our emails useful, consider becoming a paid subscriber! You'll get access to our members-only Signal community where we share ideas, discuss upcoming topics, and exchange links. Paid subscribers can also leave comments on posts and enjoy a warm, fuzzy feeling.

Not ready for a long-term commitment? You can always leave us a tip.

Become A Paid Subscriber


What did we miss? Please send us a reply or write to editor@exchangepoint.tech.

‘If we build it, they will come’: Skövde, the tiny town powering up Sweden’s video game boom

Guardian
www.theguardian.com
2025-12-12 13:00:05
It started with a goat. Now – via a degree for developers and an incubator for startups – the tiny city is churning out world-famous video game hits. What is the secret of its success? On 26 March 2014, a trailer for a video game appeared on YouTube. The first thing the viewer sees is a closeup of a...
Original Article

On 26 March 2014, a trailer for a video game appeared on YouTube. The first thing the viewer sees is a closeup of a goat lying on the ground, its tongue out, its eyes open. Behind it is a man on fire, running backwards in slow motion towards a house. Interspersed with these images is footage of the goat being repeatedly run over by a car. In the main shot, the goat, now appearing backwards as well, flies up into the first-floor window of a house, repairing the glass it smashed on its way down. It hurtles through another window and back to an exploding petrol station, where we assume its journey must have started.

This wordless, strangely moving video – a knowing parody of the trailer for a zombie survival game called Dead Island – was for a curious game called Goat Simulator. The game was, unsurprisingly, the first to ever put the player into the hooves of a goat, who must enact as much wanton destruction as possible. It was also the first massive hit to come out of a small city in Sweden by the name of Skövde.

There’s a good chance you have never heard of Skövde. There’s an even better chance you don’t know how to pronounce it (“hwevde”). Historically, the city, which is nestled between the two largest lakes in the country, Vänern and Vättern, has relied on Volvo for much of its employment. But, for the last 25 years, there has been a shift. Skövde has managed to produce some of the biggest and most talked-about video games on the planet – not just Goat Simulator but titles like V Rising, Valheim, and RV There Yet?.

In a city of 58,000, almost 1,000 people are studying video games or making a living from them; by comparison, the entire gaming sector in the UK amounts to 28,500 people. How can Skövde punch so far above its weight?

I am sitting in an office in the university where a revolution took place. At the turn of the century, Skövde implemented something that would separate it from the surrounding cities of a nation already getting a head-start in the gaming world. In the late 1990s Ulf Wilhelmsson wanted to study for a PhD in video games in Sweden. Various universities, he says, told him: “You can’t study computer games, that’s just silly.” He went instead to the University of Copenhagen and had his work funded by the University of Skövde, which he was working for at the time. In 2001, seeing a lack of students entering the university’s IT programmes, he proposed a video games development qualification. One of the things that made senior staff reluctant was that there were no game companies in Skövde. “I’m quite stubborn,” Wilhelmsson tells me at the university, “and I said: ‘If we build it, they will come.’”

Fangs very much … V Rising. Photograph: Stunlock Studios

It was difficult at the outset, when the degree began in 2002. “Since we were among the first educational programmes that did this,” says Sanny Syberfeldt, the director of the design programme, “we had no guide or no model, so we have had to make them up as we went along.” The degree is now hugely popular, attracting multiple applicants per seat. “Our aim has never been to help students to fulfil the short-term needs of the games industry,” says Wilhelmsson. “It has always been to change the industry, create something that has not yet been done.”

His colleague Lissa Holloway-Attaway, who wears a pink multicoloured jumper with tigers on it, tackles gaming’s hinterland, asking the students to reflect on how gaming can intersect with subjects such as gender, identity and grief. One project involves them creating the prototype for a game that revolves around a historical environment or object.

Science Park Skövde, another crucial player in the city’s continued nurturing of game developers, is right next door to the university’s gaming department. Outwardly an unremarkable white building, inside it feels light and airy, with colourful chairs and jigsaw pieces dotted on the wall. The team at the Science Park run a three-year programme called Sweden Game Start-Up, which incubates teams looking to turn gaming into a viable career, helping them to find funding for their works-in-progress. One colleague says they “loan out self-confidence”. “The goal is for them to exit with a sustainable company that will hopefully live on after they leave the programme,” says Jennifer Granath, who works in communications at the Science Park.

Over fika – the Swedish term for a coffee-and-cake break, which in this case involves cinnamon buns – I meet around 30 of the developers in the incubation programme. They range in age from 22 to 45 and are incredibly warm and articulate. With a great deal of pride they show me their games in a big open room. There’s Home Sweet Gnome, in which you are a gnome who runs a bed and breakfast for creatures from folklore; horror golf game Club House on Haunted Hill; and Muri: Wild Woods, in which you are a mouse who goes on a cleaning adventure. Some of these games have been funded and released; some are still in development.

Billy no mates … Goat Simulator 3. Photograph: Coffee Stain Studios

It’s invaluable to be here, say the developers, 99% of whom studied at the university. One says that in Stockholm the game companies don’t care about graduates because there are so many of them; in Skövde, a city with 1/20th of its population, everyone knows everyone and scratches one another’s backs. “The size of this city is really to the community’s advantage,” says Louise Perrson, the head of the university’s game-writing programme. “If you come here with the thought of getting into the industry, you also come here knowing – or at least finding out – that you’ll be part of one big community.”

It’s significant that the three game studios that have helped to put Skövde on the map – Iron Gate, Coffee Stain and Stunlock – have all stayed in the city. Josefin Bertsson, a community manager at Iron Gate, says: “Without the incubator, the company most likely would not have existed.” Iron Gate’s premises have a sleek, lavish feel: lots of dark wood, plum-coloured sofas, a huge lighting fixture in the shape of antlers. Various swords are dotted about the place; and there is a large model of Sauron’s eye atop a black Lego tower.

The studio is most well known for making Valheim, a Viking survival game in which players are placed in a kind of purgatory and must attempt to ascend to Valhalla by proving themselves to Odin. Its preview version sold around 5m copies in its first five weeks. It may be Skövde’s most successful game. “I think that when you’re in a town that is this small but you have the amount of game developers that you have,” says Bertsson, “it’s easier to form a kind of game development community than in, say, Stockholm. It’s easier to congratulate your friends on something because they’re right next door.”

Coffee Stain, whom we have to thank for Goat Simulator, work out of an extraordinary space that was once a bank. (Studio manager Robert Lazic calls it a “bank palace”.) Over several floors there are features like a gym, a massage room, a board game room, and a huge wood-panelled meeting room full of fake trees. Lazic was part of the university’s first cohort of students – “at the fumbling beginnings”, as he puts it. The studio is now focusing on Satisfactory, its latest game, which puts players on an alien planet and tasks them with building factories and increasingly complex infrastructure. Success in Skövde breeds success, he says. Satisfactory has sold 5.5m copies.

Norse majeure … Valheim. Photograph: Coffee Stain Studios

At Stunlock I meet Ulf Rickard Frisegård, the company’s CEO, and Tau Petersson, its PR and event manager. It’s shoes off at the door, as in many Swedish establishments. There are velvet teal curtains and board games in cabinets around the place. Stunlock created V Rising, a game in which the player embodies an awakened vampire and builds a castle for them, defeating bosses and swerving garlic along the way. V Rising sold more than 1m copies in its first week. Frisegård and Petersson were also students at the university and are in no doubt about the unique prestige of the city. When you are pitching yourself to people, “you jump through a lot of hoops telling them you’re from Skövde”, says Frisegård. Industry bigwigs make a beeline for it. Frisegård remembers an extremely powerful games industry figure, whom he declines to name, visiting their offices to take a look at V Rising: “He had a cab parked outside here all day – one person sitting and waiting for him – and then driving him, what is it, one kilometre to the train station.”

Nationally, Sweden is a towering force in the video games arena. It is the home of multi-billion-pound giants such as Minecraft and Candy Crush. In 2023 the revenue from Swedish game companies was more than £2.5bn. The country was quick to install high-speed internet and it made subsidised computers available to its population – perfect conditions for games design. I arrived, therefore, under the impression that Sweden’s status in the gaming world meant the national government was very supportive of the industry. Marcus Toftedahl, business coach in game development at the Science Park, says: “Weeelll … that is not true.” It’s a sore point. While the municipality of Skövde has been proud and supportive, the national government has not: “Sweden is lacking a national strategy and lacking a national support system for the games industry, even though we’re super-famous for our games all around the world.” This summer the Science Park went from receiving around £240,000 a year to £80,000 a year from the national government. There is a lack of understanding about game development, Toftedahl says, and the government has shifted towards more research-heavy areas such as AI.

Despite these worries Skövde continues to rightly blow its own trumpet about its gaming successes. But one of the priorities for people in the industry is to ensure that its locals know about them. “It’s well known outside Skövde – maybe not as known in Skövde – that we have this huge international industry that is really successful,” says Theres Sahlström, the chair of the Skövde Municipal Executive Committee. “So we’re trying to bring attention to it.” She’s talking to me as we’re standing by the Walk of Game on the city’s cobbled high street – a newly created series of reminders of Skövde’s achievements in the gaming world.

When people ask Toftedahl if Skövde’s success is replicable in other places, he says that while the short answer is yes, the long answer is less encouraging. “The smallness helps,” he says. But even other small Swedish cities haven’t been able to emulate Skövde. On the island of Gotland, for example, there have been university courses on gaming since 2002. But for Gotland, which has almost exactly the same population as Skövde, tourism is the main industry, so the region hasn’t directed as much support towards gaming. You could follow Skövde’s lead – ensure that your town taught video game development at its university; host events where game developers could showcase projects; organise networking events where people felt safe exchanging knowledge – and you would cultivate something special. But lightning simply might not strike twice.

Unswitching loops for fun and profit

Lobsters
xania.org
2025-12-12 12:58:09
Comments...
Original Article

Written by me, proof-read by an LLM.
Details at end.

Sometimes the compiler decides the best way to optimise your loop is to… write it twice. Sounds counterintuitive? Let’s change our sum example from before to optionally return a sum-of-squares:
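The source listing from the original post didn’t survive extraction, so here is a minimal sketch of the kind of function being discussed (the exact names, types, and signature are assumptions, not the post’s code):

```cpp
#include <cstddef>

// Sum an array, optionally squaring each element first.
// Note that `squared` never changes while the loop runs --
// that invariance is what the compiler exploits below.
int sum(const int *values, size_t count, bool squared) {
    int total = 0;
    for (size_t i = 0; i < count; ++i) {
        total += squared ? values[i] * values[i] : values[i];
    }
    return total;
}
```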

At -O2 the compiler turns the ternary into sum += value * (squared ? value : 1); it uses a multiply-and-add (mla) instruction to do both operations at once, and conditionally picks either value or the constant 1 to avoid a branch inside the loop.

However, if we turn the optimisation level up, the compiler uses a new approach:

Here the compiler realises the bool squared value doesn’t change throughout the loop, and decides to duplicate the loop: one copy that squares each time unconditionally, and one where it never multiplies at all. This is called “loop unswitching”.

The check of squared is moved out of the loop, and the appropriate loop is then selected, either .LBB0_4 (non-squaring) or continuing to .LBB0_2 (the squaring version).

Each loop is perfectly optimised for its duties, and it’s as if you had written:
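The unswitched version shown in the original post was also lost in extraction; a rough reconstruction of what the compiler effectively generates (a sketch under the same assumed signature as above, not the article’s exact listing) is:

```cpp
int sum(const int *values, size_t count, bool squared) {
    int total = 0;
    if (squared) {
        // One copy of the loop that always squares...
        for (size_t i = 0; i < count; ++i) {
            total += values[i] * values[i];
        }
    } else {
        // ...and one copy that never multiplies at all.
        for (size_t i = 0; i < count; ++i) {
            total += values[i];
        }
    }
    return total;
}
```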

By duplicating the loop this way, the compiler makes sure that the multiplication doesn’t happen unless you specifically asked for it. In the -O2 code, the compiler bet on doing the multiply-and-add each time even when it wasn’t needed. In the loop unswitching case, we do pay a code size penalty (some code is duplicated, after all), and that’s why loop unswitching didn’t occur on the lower optimisation setting.

As always, it’s good to trust your compiler’s decisions, but know the kinds of trade-offs it’s making at various optimisation levels. You can always verify what it’s doing with Compiler Explorer, after all!

See the video that accompanies this post.


This post is day 12 of Advent of Compiler Optimisations 2025, a 25-day series exploring how compilers transform our code.

This post was written by a human (Matt Godbolt) and reviewed and proof-read by LLMs and humans.

Support Compiler Explorer on Patreon or GitHub, or by buying CE products in the Compiler Explorer Shop.

Posted at 06:00:00 CST on 12th December 2025.

How to break free from smart TV ads and tracking

Hacker News
arstechnica.com
2025-12-12 12:51:40
Comments...
Original Article


Sick of smart TVs? Here are your best options.

Credit: Aurich Lawson | Getty Images

Smart TVs can feel like a dumb choice if you’re looking for privacy, reliability, and simplicity.

Today’s TVs and streaming sticks are usually loaded up with advertisements and user tracking, making offline TVs seem very attractive. But ever since smart TV operating systems began making money, “dumb” TVs have been hard to find.

In response, we created this non-smart TV guide that includes much more than dumb TVs. Since non-smart TVs are so rare, this guide also breaks down additional ways to watch TV and movies online and locally without dealing with smart TVs’ evolution toward software-centric features and snooping. We’ll discuss a range of options suitable for various budgets, different experience levels, and different rooms in your home.


Our best recommendation

This is a dumb TV guide, but first, let’s briefly highlight the best recommendation for most people: Take your TV offline and plug in an Apple TV box.

The Apple TV 4K and Siri Remote. Your best option. Credit: Jeff Dunn

An Apple TV lets you replace smart TV software with Apple’s cleaner tvOS, and it’s more intuitive than using most smart TVs and other streaming devices. Apple’s tvOS usually runs faster and more reliably, and it isn’t riddled with distracting ads or recommendations. And there’s virtually no learning curve for family members or visitors, something that can’t always be said for DIY alternatives .

Critically, Apple TV boxes are also an easy recommendation on the privacy front. The setup process makes it simple for anyone to ensure that the device is using relatively minimal user tracking. You’re likely to use an Apple TV box with the Apple TV app or with an Apple account, which means sending some data to Apple. But Apple has a better reputation for keeping user information in-house, and Apple TV boxes don’t have automatic content recognition (ACR).

For more information, read my previous article on why Apple TVs are privacy advocates’ go-to streaming device .

Unlike the other smart TV alternatives in this guide (such as a laptop), an Apple TV box spares you from worrying about various streaming services’ requirements for streaming in 4K or HDR. But you still have to make sure your display and HDMI cable are HDCP 2.2-compliant and that you’re using HDMI 2.0 or better if you want to watch 4K or HDR content. You could even connect network-attached storage (NAS) to your Apple TV box so you can stream files from the storage device.

Plus, using a smart TV offline means you’ll have access to the latest and greatest display technologies, which is generally not the case for dumb TVs.

Things to keep in mind

One common concern about using smart TVs offline is the fear that the TV will repeatedly nag you to connect to the Internet. I’ve seen some reports of this happening over the years, but generally speaking, this doesn’t seem to be expected behavior. If you can’t find a way to disable TV notifications, try contacting support.

You may want your offline TV to keep LAN access so you can still use some smart TV features, like phone mirroring or streaming from a NAS. In this case, you can use your router (if supported) to block your TV’s IP address from connecting to the Internet.

And Google TV users should remember to set their TV to “ basic TV ” mode, which lets you use the TV without connecting to the Internet.

Dumb TVs are endangered

Buying a TV that doesn’t connect to the Internet is an obvious solution to avoiding smart TV tracking and ads, but that’s much easier said than done.

Smart TV OSes help TV-makers stay afloat in an industry with thin margins on hardware. Not only do they provide ad space, but they also give OS operators and their partners information on how people use their TVs—data that is extremely valuable to advertisers. Additionally, mainstream acceptance of the Internet of Things has led many people to expect their TVs to have integrated Wi-Fi. These factors have all made finding a dumb TV difficult, especially in the US.

Dumb TVs sold today have serious image and sound quality tradeoffs, simply because companies don’t make dumb versions of their high-end models. On the image side, you can expect lower resolutions, sizes, and brightness levels and poorer viewing angles. You also won’t find premium panel technologies like OLED . If you want premium image quality or sound, you’re better off using a smart TV offline. Dumb TVs also usually have shorter (one-year) warranties.

Any display or system you end up using needs HDCP 2.2 compliance to play 4K or HDR content via a streaming service or any other DRM-protected 4K or HDR media, like a Blu-ray disc.

Best ways to find a dumb TV

Below are the brands I’ve identified as most likely to have dumb TVs available for purchase online as of this writing.

Emerson

I was able to find the most non-smart TVs from Emerson. Emerson is a Parsippany, New Jersey, electronics company that was founded in 1948.

As of this writing, Emerson’s dumb TV options range from 7-inch portable models to 50-inch 4K TVs. Its TVs are relatively easy to get since they’re sold directly and through various online retailers, including Amazon, Home Depot, Best Buy, and, for some reason, Shein .

Westinghouse

Another company still pushing non-smart TVs is Westinghouse, a Pittsburgh-headquartered company founded in 1886. In addition to other types of electronics and home goods, Westinghouse also has an industrial business that includes nuclear fuel .

Westinghouse’s dumb TVs max out at 32 inches and 720p resolution, but some of them also have a built-in DVD player. You can find Westinghouse’s dumb TVs on Amazon . However, Westinghouse seems to have the most dubious reputation of these brands based on online chatter .

Sceptre

Sceptre, a Walmart brand, still has a handful of dumb TVs available. I’ve noticed inventory dwindle in recent months, but Walmart usually has at least one Sceptre dumb TV available .

Amazon search

Outside of the above brands, your best bet for finding a non-smart TV is Amazon. I’ve had success searching for “dumb TVs” and have found additional results by searching for a “non-smart TV.”

Projectors

For now, it’s not hard to find a projector that doesn’t connect to the Internet or track user activity. And there are options that are HDCP 2.2-compliant so you can project in 4K and HDR.

Things to keep in mind

Projectors aren’t for everyone. They still require dim rooms and a decent amount of physical space to produce the best image. (To see how much space you need for a projector, I recommend RTINGS’ handy throw calculator .)

The smart-tech bug has come for projectors, too, though, and we’ve started seeing more smart projectors released over the past two years.

Computer monitors

If you want a dumb display for watching TV, it’s cheaper to buy a smart TV and keep it offline than it is to get a similarly specced computer monitor. But there are benefits to using a monitor instead of a dumb TV or an offline smart TV. (Of course, this logic doesn’t carry over to “ smart monitors .”)

When it comes to smaller screens, you’ll have more options if you look at monitors instead of TVs. This is especially true if you want premium features, like high refresh rates or quality speakers, which are hard to find among TVs that are under 42 inches.

Monitor vendors are typically more forthcoming about product specs than TV makers are. It’s hard to find manufacturer claims about a TV’s color gamut, color accuracy, or typical brightness, but a computer monitor’s product page usually has all this information. It’s also easier to find a monitor with professional-grade color accuracy than a TV with the same, and some monitors have integrated calibration tools.

Things to keep in mind

Newer and advanced types of display technologies are rarer in monitors. This includes OLED, Mini LED, and Micro RGB . And if you buy a new monitor, you’ll probably need to supply your own speakers.

A computer monitor isn’t a TV, so there’s no TV tuner or way to use an antenna. If you really wanted to, you could get a cable box to work with a monitor with the right ports or adapters. People are streaming more than they’re watching broadcast and cable channels , though, so you may not mind the lack of traditional TV.

Digital signage

Digital signage displays are purpose-built for displaying corporate messages, often for all or most hours of the day. They typically have features that people don’t need for TV watching, such as content management software. And due to their durability and warranty needs, digital signage displays are often more expensive than similarly specced computer monitors.

Again, it’s important to ensure that the digital signage is HDCP 2.2-compliant if you plan on watching 4K or HDR.

Things to keep in mind

But if you happen to come across a digital signage display that’s the right size and the right price, is there any real reason why you shouldn’t use it as a TV? I asked Panasonic, which makes digital signage. A spokesperson from Panasonic Connect North America told me that digital signage displays are made to be on for 16-to-24 hours per day and with high brightness levels to accommodate “retail and public environments.”

The spokesperson added:

Their rugged construction and heat management systems make them ideal for demanding commercial use, but these same features can result in higher energy consumption, louder operation, and limited compatibility with home entertainment systems.

Panasonic’s representative also pointed out that real TVs offer consumer-friendly features for watching TV, like “home-optimized picture tuning, simplified audio integration, and user-friendly menu interfaces.”

If you’re fine with these caveats, though, and digital signage is your easiest option, there isn’t anything stopping you from using one to avoid smart TVs.

What to connect to your dumb TV

After you’ve settled on an offline display, you’ll need something to give it life. Below is a breakdown of the best things to plug into your dumb TV (or dumb display) so you can watch TV without your TV watching you.

Things to keep in mind

If you’re considering using an older device for TV, like a used laptop, make sure it’s HDCP 2.2-compliant if you want to watch 4K or HDR.

And although old systems and displays and single-board computers can make great dumb TV alternatives, remember that these devices need HDMI 2.0 or DisplayPort 1.2 or newer to support 4K at 60 Hz.

What to connect: A phone

Before we get into more complex options for powering your dumb TV, let’s start with devices you may already own.

It’s possible to connect your phone to a dumb display, but doing so is harder than connecting a PC. You’d need an adapter, such as a USB-C (or Lightning ) Digital AV Adapter.

You can use a Bluetooth mouse and keyboard to control the phone from afar. By activating Assistive Touch, I’ve even been able to use my iPhone with a mouse that claims not to support iOS. With an extra-long cable, you could potentially control the phone from your lap. That’s not the cleanest setup, though, and it would look odd in a family room.

Things to keep in mind

If your phone is outputting to your display, you can’t use it to check your email, read articles, or doomscroll while you watch TV. You can fix this by using a secondary phone as your streaming device.

If you’re using a phone to watch a streaming service, there’s a good chance you won’t be watching in 4K, even if your streaming subscription supports it. Netflix, for example, limits resolution to 1080p or less (depending on the model) for iPhones. HDR is supported across iPhone models but not with Android devices.

Screen mirroring doesn’t always work well with streaming services and phones. Netflix, for instance, doesn’t support AirPlay or Android phone casting. Disney+ supports Chromecast and AirPlay, but AirPlay won’t work if you subscribe to Disney+ with ads (due to “technical reasons”).

What to connect: A laptop

A laptop is an excellent smart TV alternative that’s highly customizable yet simple to deploy.

Most mainstream streaming providers that have dedicated smart TV apps, like Netflix and HBO Max, have PC versions of their apps. And most of those services are also available via web browsers, which work much better on computers than they do on smart TVs. You can also access local files—all via a user interface that you and anyone else watching TV are probably already familiar with.

With a tethered laptop, you can quickly set up a multi-picture view for watching two games or shows simultaneously. Multi-view support on streaming apps is extremely limited right now, with only Peacock and dedicated sports apps like ESPN and MLB TV offering it.

A laptop also lets you use your dumb TV for common PC tasks, like PC gaming or using productivity software (sometimes you just want to see that spreadsheet on a bigger screen).

Things to keep in mind

Streaming in 4K or HDR sometimes comes with specific requirements that are easy to overlook. Some streaming services, for example, won’t stream in 4K on certain web browsers—or with any web browser at all.

Streaming services sometimes have GPU requirements for 4K and HDR streaming. For example, to stream Netflix in 4K or HDR from a browser, you need Microsoft Edge and an Intel 7th Generation Core or AMD Ryzen CPU or better, plus the latest graphics drivers . Disney+ doesn’t allow 4K HDR streaming from any web browsers. Streaming 4K content in a web browser might also require you to acquire the HEVC/H.265 codec, depending on your system.

If 4K or HDR streaming is critical to you, it’s important to check your streaming providers’ 4K and HDR limits; it may be best to rely on a dedicated app.

If you want to be able to comfortably control your computer from a couch, you’ll also need to invest in some hardware or software. You can get away with a basic Bluetooth mouse and keyboard. Air mouses are another popular solution.

The WeChip W1 air mouse. Credit: WeChip/Amazon

If you don’t want extra gadgets taking up space, software like the popular Unified Remote (for iOS and Android) can turn your phone into a remote control for your computer. It also supports Wake-On-LAN.

You may encounter hiccups with streaming availability. Most streaming services available on smart TVs are also accessible via computers, but some aren’t. Many FAST (free ad-supported streaming television) services and channels, such as the Samsung TV Plus service and Filmrise FAST app and channel, are only available via smart TVs. And many streaming services’ apps, including Netflix and Disney+, aren’t available on macOS. If you’re using a very old computer, you might run into compatibility issues with streaming services. Netflix’s PC app, for example, requires Windows 10 or newer, and if you stream Netflix via a browser on a system running an older OS, you’re limited to SD resolution.

And while a laptop and dumb display setup can keep snooping TVs out of your home, there are obviously lots of user tracking and privacy concerns with web browsers, too. You can alleviate some concerns by researching the browsers you want to use for watching TV.

What to connect: A home theater PC

For a more permanent setup, consider a dedicated home theater PC (HTPC). They don’t require beefy, expensive specs and are more flexible than smart TV platforms in terms of software support and customization.

You can pick a system that fits on your living room console table, like a mini PC , or match your home’s aesthetics with a custom build . Raspberry Pis are a diminutive solution that you can dress up in a case and use for various additional tasks, like streaming games from your gaming PC to your TV or creating an AirPlay music server for streaming Spotify and other online music and local music to AirPlay-compatible speakers.

The right accessories can take an HTPC to the next level. You can use an app like TeamViewer or the more TV-remote-like Unified Remote to control your PC with your phone. But investing in dedicated hardware is worthwhile for long-term and multi-person use. Bluetooth keyboards and mice last a long time without needing a charge and can even be combined into one device .

Logitech’s wireless K400 Plus Touch Keyboard combines a keyboard with a touchpad. Credit: Logitech

Other popular options for HTPC control are air remotes and the Flirc USB , which plugs into a computer’s USB-A port to enable IR remote control. Speaking of USB ports, you could use them to connect a Blu-ray/DVD player or gaming controller to your HTPC. If you want to add support for live TV, you can still find PCIe over-the-air (OTA) tuner cards .

The Pepper Jobs W10 GYRO Smart Remote is a popular air remote for controlling Windows 10 PCs. Credit: Pepper Jobs

Helpful software for home theater PCs

With the right software, an HTPC can be more useful to a household than a smart TV. You probably already have some apps in mind for your ideal HTPC. That makes this a fitting time to discuss some solid software that you may not have initially considered or that would be helpful to recommend to other cord cutters.

If you have a lot of media files you’d like to easily navigate through on your HTPC, media server software, such as Plex Media Server , is a lifesaver. Plex specifically has an app streamlined for HTPC use . The company has taken some criticism recently due to changes like new remote access rules , higher prices , and a foray into movie rentals . Although Plex is probably the most common and simplest media server software, alternatives like Jellyfin have been gaining popularity lately and are worth checking out.

Whichever media server software you use, consider pairing it with a dedicated NAS. NAS media servers are especially helpful if you want to let people, including those outside of your household, watch stuff from your media library at any time and without having to keep a high-power system turned on 24/7.

You can stream files from your NAS to a dumb TV by setting up a streaming system—such as a Raspberry Pi, Nvidia Shield, or Apple TV box —that connects to the dumb display. That device can then stream video from the NAS by using Network File System or the Infuse app, for example.

What to connect: An antenna

Nowadays, you can watch traditional, live TV channels over the Internet through over-the-top streaming services like YouTube TV and Sling TV. But don’t underestimate the power of TV antennas, which have improved in recent years and let you watch stuff for free.

This year, Horowitz Research surveyed 2,200 US adults and found that 19 percent of respondents were still using a TV antenna.

If you haven’t checked them out in a while, you might be surprised by how sleek bunny ears look now. Many of the best TV antennas now have flat, square shapes and can be mounted to your wall or windowsill.

Mohu’s Leaf antenna. Bye bye, bunny ears. Credit: Mohu

The best part is that companies can’t track what you watch with an antenna. As Nielsen said in a January 2024 blog post:

Big data sources alone can’t provide insight into the viewing behaviors of the millions of viewers who watch TV using a digital antenna.

Antennas have also gotten more versatile. For example, in addition to local stations, an antenna can provide access to dozens of digital subchannels. They’re similar to the free ad-supported television channels gaining popularity with smart TV users today, in that they often show niche programming or a steady stream of old shows and movies with commercial breaks. You can find a list of channels you’re likely to get with an antenna via this website from the Federal Communications Commission.

TV and movies watched through an antenna are likely to be less compressed than what you get with cable, which means you can get excellent image quality with the right setup.

You can also add DVR capabilities, like record and pause, to live broadcasts through hardware, such as a Tablo OTA DVR device or Plex DVR , a subscription service that lets antenna users add broadcast TV recordings to their Plex media servers.

A diagram of the 4th Gen Tablo’s ports. Credit: Tablo

Things to keep in mind

You’re unlikely to get 4K or HDR broadcasts with an antenna. ATSC 3.0, also known as Next Gen TV, enables stations to broadcast in 4K HDR but has been rolling out slowly. Legislation recently proposed by the FCC could further slow things .

In order to watch a 4K or HDR broadcast, you’ll also need an ATSC 3.0 tuner or an ATSC 3.0-equipped TV. The latter is rare. LG, for example, dropped support in 2023 over a patent dispute . You can find a list of ATSC 3.0-certified TVs and converters here .

Realistically, an antenna doesn’t have enough channels to provide sufficient entertainment for many modern households. Sixty percent of antenna owners also subscribe to some sort of streaming service, according to Nielsen.

Further, obstructions , like tall buildings and power lines, could hurt an antenna’s performance. Another challenge is getting support for multiple TVs in your home. If you want OTA TV in multiple rooms, you either need to buy multiple antennas or set up a way to split the signal (such as by using an old coaxial cable and splitter, running a new coaxial cable, or using an OTA DVR, such as a Tablo or SiliconDust’s HDHomeRun ).

Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She's been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.


The Tor Project is switching to Rust

Hacker News
itsfoss.com
2025-12-12 12:35:57
Comments...
Original Article

The Tor Project has been busy with the rustification of their offering for quite some time now.

If you have used Tor Browser, you know what it does: anonymous browsing through encrypted relay chains. The network itself has been running since the early 2000s. All of it is built on C.

But that C codebase is an issue. It is known to have buffer overflows, use-after-free bugs, and memory corruption vulnerabilities. That is why they introduced Arti, a Rust rewrite of Tor that tackles these flaws by leveraging the memory safety of the programming language.

A new release of Arti just dropped last week, so let's check it out!

Arti 1.8.0: What's New?

Source: The Tor Project

We begin with the main highlight of this release, the rollout of the circuit timeout rework that was laid out in proposal 368. Tor currently uses something called Circuit Dirty Timeout (CDT). It is a single timer that controls when your connection circuits become unavailable and when they close down.

Unfortunately, it is predictable. Someone monitoring traffic can spot these patterns and potentially track your activity. Arti 1.8.0 fixes this by implementing usage-based timeouts with separate timers. One handles when circuits accept new connections. Another closes idle circuits at random times instead of fixed intervals.

This should reduce the risk of fingerprinting from predictable timeout behavior.

Next up is the new experimental arti hsc ctor-migrate command that lets onion service operators migrate their restricted discovery keys from the C-based Tor to Arti's keystore.

These keys handle client authorization for onion services. The command transfers them over without requiring operators to do the manual legwork. The release also delivers improvements for routing architecture, protocol implementation, directory cache support, and OR port listener configuration.

You can go through the changelog to learn more about the Arti 1.8.0 release.

Via: Sam Bent


About the author

Sourav Rudra

A nerd with a passion for open source software, custom PC builds, motorsports, and exploring the endless possibilities of this world.

The Simple Habit That Saves My Evenings

Lobsters
alikhil.dev
2025-12-12 11:42:33
Comments...
Original Article

As a software engineer, I often work on big tasks that require hours of continuous and focused work. However, we have plenty of meetings, colleagues asking us questions in Slack, and lunch breaks. Add to that a colleague who stops by your desk and invites you for a cup of coffee if you work from the office.

And usually, we don’t really have such a luxury as hours of uninterrupted time.

Nevertheless, sometimes we catch the flow of productive and focused work at the end of the workday. Imagine you come up with an elegant solution to a problem you’ve been tackling all day, or maybe even the whole past week. You can’t wait to implement and test your solution. And of course, you are so driven by your idea that you decide to continue working despite your working day being over. “20 minutes more and I will finish it,” you think. Obviously, it rarely works out that way; some edge cases and new issues will inevitably arise. You come to your senses only 2–3 hours later—tired, hungry, demotivated, and still struggling with your problem. You just wasted your evening, with nothing to show for it. Worse, you overworked and didn’t recover that night, so you start the next workday already exhausted.

Rick and morty 20 minutes adventure

You can imagine what will follow next. Nothing good, really.

Rick and morty 20 minutes adventure

I remember this happening back when I worked at a fast-growing startup, KazanExpress, while living in Innopolis . Our office was buzzing with energy, and the pace was intense—we often pushed ourselves late into the night. One evening, I felt like I had finally cracked a tricky part of our infrastructure. I thought, “Just 20 more minutes and I’ll finish it.” Of course, those 20 minutes stretched into well over three hours. By the time I left the office, I was tired, hungry, and frustrated—without any real progress to show. The next morning, walking back to the office, I realized how drained I already felt before even starting work. That was when it became clear to me: it’s better to stop, write down the next steps, and come back with a fresh head.

Of course, some might argue, “But you are considering only a negative scenario; one could really finish that job in 20 minutes and go home happy…”. Sure, but I think this risk is not worth it. Instead, I would suggest doing another thing.

Rather than trying to complete your task in 20 minutes, take this time to write down your thoughts and a step-by-step action plan of what you think you need to do to finish your task. Then go home. Rest. A feeling of incompleteness will motivate you to come back and finalize your work the next day. Only this time you will be full of energy and armed with a settled plan. No doubt you’ll accomplish your task before lunch.

Writing down the next steps helps to clear your mind after a workday. You write and forget about your work until the next morning.

As a bonus, there is a chance that new, better ideas will come while you sleep or rest.

I have been using this trick for more than 5 years now, and it helps me to keep my work and life balanced. Here are its two main ideas:

  • Don’t overwork
  • Write down the next steps before finishing your workday

New Windows RasMan zero-day flaw gets free, unofficial patches

Bleeping Computer
www.bleepingcomputer.com
2025-12-12 11:28:06
Free unofficial patches are available for a new Windows zero-day vulnerability that allows attackers to crash the Remote Access Connection Manager (RasMan) service. [...]...
Original Article


Free unofficial patches are available for a new Windows zero-day vulnerability that allows attackers to crash the Remote Access Connection Manager (RasMan) service.

RasMan is a critical Windows system service that starts automatically, runs in the background with SYSTEM-level privileges, and manages VPN, Point-to-Point Protocol over Ethernet (PPPoE), and other remote network connections.

ACROS Security (which manages the 0patch micropatching platform) discovered a new denial-of-service (DoS) flaw while looking into CVE-2025-59230, a Windows RasMan privilege escalation vulnerability exploited in attacks that was patched in October.

The DoS zero-day has not been assigned a CVE ID and remains unpatched across all Windows versions, including Windows 7 through Windows 11 and Windows Server 2008 R2 through Server 2025.

As the researchers found, when combined with CVE-2025-59230 (or similar elevation-of-privileges flaws), it allows attackers to execute code by impersonating the RasMan service. However, that attack only works when RasMan is not running.

The new flaw provides the missing puzzle piece, enabling threat actors to crash the service at will and opening the door to privilege escalation attacks that Microsoft thought it had closed.

Unprivileged users can exploit the zero-day to crash the RasMan service due to a coding error in how it processes circular linked lists. When the service encounters a null pointer while traversing a list, it attempts to read memory from that pointer rather than exiting the loop, causing a crash.
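BleepingComputer does not publish the vulnerable code, but the bug class it describes is straightforward to illustrate. Below is a hypothetical C++ sketch (not RasMan’s actual source; all names are invented) of a circular-list traversal that keeps reading through a null link instead of exiting the loop:

```cpp
struct Node {
    Node *next;   // in a well-formed circular list, this is never null
    int   data;
};

// Hypothetical traversal of a circular linked list. If a null `next`
// pointer ends up in the list, the loop condition below never fires,
// so the following iteration dereferences the null pointer and crashes.
int SumCircular(Node *head) {
    if (head == nullptr) {
        return 0;
    }
    int total = 0;
    Node *cur = head;
    do {
        total += cur->data;      // crashes here once cur has become null
        cur = cur->next;         // BUG: no null check before continuing
    } while (cur != head);       // FIX: also stop when cur == nullptr
    return total;
}
```

An unprivileged caller who can get such a malformed list into the service’s state could then crash it on demand, which is the denial-of-service primitive described above.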

ACROS Security now provides free, unofficial security patches for this Windows RasMan zero-day via its 0patch micropatching service for all affected Windows versions until Microsoft releases an official fix.

To install the micropatch on your device, you have to create an account and install the 0Patch agent. Once launched, the agent will automatically apply the micropatch without requiring a restart unless a custom patching policy blocks it.

"We alerted Microsoft about this issue; they will likely provide an official patch for still-supported Windows versions in one of future Windows updates," ACROS Security CEO Mitja Kolsek said today.

"As always, we included these 0day patches in our FREE plan until the original vendor has provided their official patch."

A Microsoft spokesperson was not immediately available for comment when contacted by BleepingComputer earlier today.


“Trump Has Appointed Himself, Judge, Jury, and Executioner”

Intercept
theintercept.com
2025-12-12 11:00:00
The Trump administration is killing civilians in the Caribbean and Pacific and trying to suppress videos of boat strikes and press coverage. The post “Trump Has Appointed Himself, Judge, Jury, and Executioner”  appeared first on The Intercept....
Original Article

In September, The Intercept broke the story of the U.S. military ordering an additional strike on an alleged drug boat in the Caribbean.

Since then, U.S. boat strikes have expanded to the Pacific Ocean. The Intercept has documented 22 strikes as of early December that have killed at least 87 people . Alejandro Carranza Medina, a Colombian national, was one of the dozens of people killed in these strikes. His family says he was just out fishing for marlin and tuna when U.S. forces attacked his boat on September 15. On behalf of Medina’s family, attorney Dan Kovalik has filed a formal complaint with the Inter-American Commission on Human Rights.

“We’re bringing a petition alleging that the U.S. violated the American Declaration of the Rights and Duties of Man , in particular, the right to life, the right to due process, the right to trial, and we’re seeking compensation from the United States for the family of Alejandro Carranza, as well as injunctive relief, asking that the U.S. stop these bombings,” Kovalik told The Intercept.

In the midst of this massive scandal, the so-called Department of War is cracking down on journalists’ ability to cover U.S. military actions. Back in October, Secretary Pete Hegseth introduced major new restrictions on reporters covering the Pentagon. In order to maintain press credentials to enter the Pentagon, journalists would have to sign a 17-page pledge committing to the new rules limiting press corps reporting to explicitly authorized information, including a promise to not gather or seek information the department has not officially released.

This week on The Intercept Briefing, host Jessica Washington speaks to Kovalik about Medina’s case. Intercept senior reporter Nick Turse and Gregg Leslie, executive director of the First Amendment Clinic at Arizona State University Law, also join Washington to discuss the strikes off the coast of Latin America, subsequent attacks on shipwrecked survivors, and the administration’s response to reporting on U.S. forces and the Pentagon.

“Americans should be very concerned because President Trump has appointed himself, judge, jury, and executioner,” says Turse of the administration’s justification for targeting individuals it claims to be in a “non-international armed conflict” with. “He has a secret list of terrorist groups. He decided they’re at war with America. He decides if you’re a member of that group, if he says that you are, he says he has the right to kill you.”

Leslie raised concerns about the administration’s attempts to erase press freedoms. “It’s just that fundamental issue of, who gets to cover the government? Is it only government-sanctioned information that gets out to the people, or is it people working on behalf of the United States public who get to really hold people to account and dive deep for greater information? And all of that is being compromised, if there’s an administration that says, ‘We get to completely put a chokehold on any information that we don’t want to be released,’” says Leslie. “You just don’t have a free press if you have to pledge that you’re not going to give away information just because it hasn’t been cleared. It just shouldn’t work that way, and it hasn’t worked that way. And it’s frightening that we’ve gotten an administration trying to make that the norm.”

“What’s to stop a lawless president from killing people in America that he deems to be domestic terrorists?”

With a president who regularly targets journalists and critics, Turse adds, “What’s to stop a lawless president from killing people in America that he deems to be domestic terrorists? … These boat strikes, the murders of people convicted of no crimes, if they become accepted as normal. There’s really nothing to stop the president from launching such attacks within the United States.”

Listen to the full conversation of The Intercept Briefing on Apple Podcasts, Spotify, or wherever you listen.


Transcript

Jessica Washington: Welcome to The Intercept Briefing, I’m Jessica Washington.

Back in September, President Donald Trump made public that he and his administration had ordered a military strike on a boat in the Caribbean. On social media Trump claimed that members of Tren de Aragua, a Venezuelan gang, were transporting drugs on the vessel.

Reporter: And also the boat that you mentioned yesterday where 11 people were killed. What was found on that boat and why were the men killed instead of taken into custody?

Donald Trump: On the boat, you had massive amounts of drugs. We have tapes of them speaking. There was massive amounts of drugs coming into our country to kill a lot of people. And everybody fully understands that. In fact, you see it. You see the bags of drugs all over the boat and they were hit. Obviously, they won’t be doing it again.

JW: Since then, U.S. strikes targeting boats allegedly carrying drugs to the U.S. have expanded to the Pacific Ocean. The Intercept has counted 22 strikes as of early December. Those strikes have killed at least 87 people.

Members of Congress from both parties say these strikes are nothing short of extrajudicial killings targeting civilians that do not pose an imminent threat to the U.S. The administration has yet to provide the public any evidence that these boats are carrying drugs or affiliated with drug cartels, which the administration has also designated as “narco-terrorists.”

The family of one of those victims, Alejandro Carranza Medina, a Colombian national, says he was out fishing for marlin and tuna when a targeted strike on September 15 killed him. Attorney Daniel Kovalik has filed a human rights petition on behalf of his family. Kovalik filed the petition with the Inter-American Commission on Human Rights . And he joins me now.

Daniel Kovalik, welcome to The Intercept Briefing.

Daniel Kovalik: Thank you, Jessica. Thanks for having me.

JW: Daniel, I want to start with you telling us a little bit about Alejandro. Who was he?

DK: He was a fisherman. He was a father of four children, one adult child, three minor children. He was married, though he was separated at the time of his death.

He was close to his parents as well. And he was poor. They were a poor family and they relied on Alejandro to make ends meet through fishing. He was also, by the way, a member of the Fisherman’s Association in Santa Marta.

JW: What is known about the strike that killed Mr. Medina?

DK: It’s as much as we know about any of these strikes: he was out fishing for marlin and tuna, and his boat was the victim of what the U.S. is calling a kinetic strike, which I think essentially means it was bombed and virtually obliterated. The president of the Fishermen’s Association recognized from the video that it was one of the association’s boats that Alejandro would normally use. And of course Alejandro never came back. That’s what we know about it.

JW: What is the complaint that you’re making?

DK: First of all, we’re bringing it against the United States as a state party to the Organization of American States. They are subject to the jurisdiction of the Inter-American Commission on Human Rights, which is a body of the Organization of American States. And we’re bringing a petition alleging that the U.S. violated the American Declaration of the Rights and Duties of Man, in particular, the right to life, the right to due process, the right to trial, and we’re seeking compensation from the United States for the family of Alejandro Carranza, as well as injunctive relief, asking that the U.S. stop these bombings.

JW: Can you tell me a little bit more about why you filed the petition with the Inter-American Commission on Human Rights and what your goal is here?

DK: Yes, so we felt that, at least at the moment, it was the best place to get jurisdiction over the United States, because the U.S. is a party to the American Declaration, which, I just note, is the oldest human rights instrument in the world. It was signed in Bogota in 1948. It’s also known as the Bogota Declaration. And as I said, a petition can be brought against the U.S. as a country before the Inter-American Commission.

To get compensation from the United States in a U.S. court is very difficult because of sovereign immunity issues. But in this case, the Inter-American Commission has, essentially, agreed to waive those immunity issues. So we felt it was a good venue for us, and we will be seeking compensation, as I said, and a finding that these killings are unlawful, and we hope that plays a role in ending these killings. That’s really a big goal.

And by the way, we have not foreclosed the possibility of a court case. We’re looking into that right now, as well.

JW: Can you tell us about the process of bringing the petition to the human rights commission and what’s coming down the pipeline in this case?

DK: Yeah, it’ll be slow going for sure. But the commission will do their own investigation of the claims, which will include sending questions and queries to me, for example, about our case, but also to the United States. They will ask the U.S. to respond to the petition, to give their position on jurisdiction and on the merits, and maybe to give evidence. So those will be the next steps: an investigation of what happened here and why.

JW: Switching gears a bit. You were also hired by Colombian President Gustavo Petro, whom the Trump administration has sanctioned and accused of playing a “role in the global illicit drug trade.” What can you tell us about Petro’s case?

DK: Yeah first of all, these claims of him trafficking the drugs are completely untrue.

I’ve known Gustavo Petro for 20 years. He’s been a fighter of the drug cartels through his whole political career, including when he was a senator in Colombia, and currently he’s also very active in fighting the drug trade. He’s bombed a number of drug labs. He has engaged in a lot of crop substitution programs, encouraging farmers to go from growing coca, the raw material for cocaine, to growing other agricultural products like food items, and that’s been very successful. He’s reclaimed a lot of land from coca production to again, legitimate crop production. He’s also engaged in interception of drug boats in the Caribbean, but he doesn’t kill people. He arrests people. He’s confiscated a lot of money, which he’s actually donated to Gaza.

So this is not a drug trafficker, but this is very politically motivated. It’s very clear, given the timing of all this, that the U.S. put him on the OFAC list to punish him. For one, being an advocate, a very outspoken advocate of Palestine. And for making it clear that he was against these bombings of the boats and also opposed to any intervention in Venezuela.

That’s what this OFAC list designation is really about.

JW: Petro has also spoken about making cocaine legal. Can you speak to that at all?

DK: Yeah there’s a lot of discussion about legalizing all drugs. You see in the U.S. that we now have virtually legalized marijuana in most places.

And I think that makes a lot of sense. The Rand Corporation did a study years ago that showed it’s 20 times more effective to deal with drug addiction at home than to try to destroy drugs at their source like in Colombia. The problem isn’t the drugs per se, but in the case of the United States, you have people who feel they need to be sedated most of the time. And instead of dealing with those underlying problems, of course all the social programs we have that might alleviate that need and desire are being cut, right? So there’s a lot of discussion about legalizing drugs so they could be better regulated and frankly, so they could be taxed so the sale could be taxed. You could gain revenue from those again, to deal with drug addiction and other social problems.

JW: Turning back to Mr. Medina’s case, I wanted to see if you had any final thoughts that you wanted to share.

DK: Just that, I’ve been asked by a few journalists, do you think he was innocent?

And you know what my response is? That I know all of these people killed were innocent. You know why? Because where I come from, you’re innocent until proven guilty. None of these people were proven guilty in a court of law, and none of them were even charged, as far as I know, by the U.S. with a crime.

And by the way, even if they had been arrested, charged, tried, convicted, even in a death penalty state, they wouldn’t get the death penalty because drug trafficking is not a capital crime. So there’s nothing lawful about these. There’s no justification for what the U.S. is doing. And again, another journalist from CNN actually said how are you going to prove that Alejandro was innocent?

Again, I don’t have to prove he’s innocent. It’s the U.S. who had to prove he was guilty before meting out punishment to him, and they never did. So those are the things I’d like people to keep in mind. The other thing is, if the U.S. can get away with this, if they can just murder people, and that’s what it is, murder people based on mere allegations, then none of us are safe.

What they’re doing in the Caribbean is no different than if a cop went up to a guy on the street in America, in Chicago, for example, and said, “Oh, I think you’re dealing in drugs,” and shot the guy in the head. There’s no difference. And that’s not a world we want to live in. And we’re starting to live in that world with the ICE detentions. So we’re fighting not only against specifically these killings or specifically for these families — we’re fighting for the rule of law that protects all of us — and people should welcome that no matter how they view the drug issue.

JW: Thank you, Dan, for bringing your insights about this case and about what happened to Alejandro to our audience. And thank you for taking the time to speak with me on the Intercept Briefing.

DK: Thank you. I’m a big fan of The Intercept. Support The Intercept people. Thank you very much. Appreciate you.

JW: Thank you.

Break

JW: Intercept Senior Reporter Nick Turse broke the story of the U.S. military launching a subsequent attack on survivors of a strike in the Caribbean Sea back in September. According to reporting from Turse, the survivors clung to the wreckage of the boat for roughly 45 minutes before being killed.

These strikes have horrified lawmakers on both sides of the aisle, including Republican Senator Rand Paul, who expressed his disgust with the attacks during a Fox Business interview.

Rand Paul: It has not been the history of the United States to kill people who are out of combat. Even if there is a war, which most of us dispute, that a bunch of people who are unarmed allegedly running drugs is a war. We still don’t kill people when they’re incapacitated. People floating around in the water clinging to the wreckage of a ship are not in combat under any definition.

JW: Since the Trump administration launched its campaign targeting alleged “narco-terrorists” off the coast of Latin America, it has been laying the groundwork for a U.S. invasion of Venezuela without the consent of Congress or, again, providing evidence for its claims.

Congress is now demanding the administration release unedited videos of the strikes to lawmakers, or it will withhold a quarter of Defense Secretary Pete Hegseth’s travel budget.

Amid this veil of secrecy and war crime allegations, the Pentagon has effectively replaced its seasoned press corps with a new crop of right-wing influencers, including Laura Loomer, James O’Keefe, and Matt Gaetz, who claim to be covering the military but have been accused of acting as a propaganda arm instead of a press corps.

Joining us now to discuss the boat strikes and the Trump administration’s attempts to eliminate critical coverage, are Intercept Senior Reporter Nick Turse and Gregg Leslie, executive director of the First Amendment Clinic at the Sandra Day O’Connor College of Law at Arizona State University.

Nick, Gregg, Welcome to the show.

Nick Turse: Thanks so much for having me.

Gregg Leslie: Thanks.

JW: Nick, to start, can you tell us about this first strike and why it matters that the United States launched an additional strike against the survivors?

NT: Sure. This initial attack took place in the Caribbean on September 2. The United States attacked what they say are “narco-terrorists,” what’s come to be known as a drug boat.

They fired a missile at this boat. The boat was reduced to wreckage. Basically all that was left was a portion of the hull floating upside down, and there were two survivors of the initial attack. They climbed aboard that piece of wreckage and they sat there for roughly 45 minutes, while they were under U.S. video surveillance.

At the end of that 45 minutes, the United States fired another missile, which killed those two survivors. And then in quick succession, they fired two more missiles in order to sink that last remnant of the vessel. There are a number of reasons why I think it’s notable that there was a follow-up strike here.

First off, there’s a lie by omission behind all of this, and by extension, a Pentagon coverup. The Intercept, as you say, was the first outlet to reveal that this double tap strike took place. And when we went to the Pentagon about it at the time all we got was an anodyne response. So it’s notable that they wanted to keep it secret in the first place.

We of course went ahead and published, but it took the Washington Post, CNN, and the New York Times months to catch up. The question becomes, why did the Pentagon want to keep it under wraps, and why didn’t they admit this when we first asked?

The Department of War says the U.S. military is in a “non-international armed conflict” with 20-plus gangs and cartels, whose identities it’s keeping secret. And if this is true, if we’re engaged in some sort of secret quasi-war, then a double tap strike to kill survivors is illegal under international law. In fact, the Pentagon’s own Law of War manual is clear on attacking defenseless people: combatants who are incapacitated by wounds, sickness, or, very specifically, shipwreck are considered “hors de combat,” the French term for those out of combat, or those out of the fight. At that point, combatants have become protected persons. They’re non-combatants at that point, so that’s another reason why this matters. There’s also something viscerally distasteful about killing people clinging to wreckage. It’s a summary execution of wounded, helpless people.

What’s worse is that the U.S. had the survivors under surveillance for 45 minutes and only then executed them. But I also want to be clear that while the optics of this are especially horrendous, experts say that those follow-up strikes aren’t materially different from the other drug boat attacks. There have been 22 attacks thus far by the U.S. on boats in the Caribbean and the eastern Pacific.

The U.S. has killed 87 people. And experts on the laws of war, former Pentagon lawyers, State Department lawyers who are experts, say that those are 87 extrajudicial killings, or put another way, 87 murders. There’s no war, there’s no actual armed conflict despite what the Trump administration claims. So these aren’t crimes of war. They can’t be, there’s no war. They’re just murders. The president and the military are conducting murders, and in my book, that’s what matters most.

JW: So the administration has tried to justify these strikes by claiming the men that were killed were narco-terrorists. Since your initial reporting, has the White House or the Pentagon provided any credible evidence that the people killed were drug traffickers?

NT: Yeah, they’ve never provided the public with any evidence of this. You’ll recall there was a strike on a semi-submersible craft that left two survivors that the military did not execute. They didn’t arrest them, they didn’t prosecute them. They instead repatriated them to their countries of origin after blowing up their boat and sinking it.

And the question is why? And I think it’s because they didn’t have viable evidence to prosecute. What they have when they target these boats is advanced intelligence, signals intelligence, maybe human intelligence, that is, informants, but they’re not going to disclose those sources and methods in court, so they don’t have a court case.

Now, I don’t know if everyone on board these boats is a drug smuggler. It’s a question of what that even means. Is a poor fisherman moving cargo that Americans want, love, and pay big money for, a smuggler? I don’t know, but I do believe these boats are transporting drugs. That’s what my sources say.

But that’s beside the point, because these aren’t capital offenses. If the offenders were arrested, tried, or convicted, they’d get eight or 10 years in prison. They wouldn’t face a death penalty, much less be convicted and executed.

Even more of a farce is the legal theory that’s been advanced in a still-classified Justice Department finding. And it differs from some of what President Trump and the Pentagon have said in public statements about these killings of supposed narco-terrorists. This classified finding says that the targets of the attacks are not the supposed narco traffickers. The people on board are, in bloodless military speak, “collateral damage.”

The government claims that the narcotics on the boats are the lawful military targets because their cargo generates revenue for the cartels, which the Trump administration claims they’re at war with. And the cartels could theoretically sell the drugs, take the money, and buy arms to engage in this non-existent war with America. So it’s a farce based on a fiction.

JW: Nick, you touched on this a little bit, but why should people in the United States care about the legality of these strikes? Are there implications for how the government could engage with people it considers even domestic adversaries?

NT: Yeah, I think Americans should be very concerned because President Trump has appointed himself judge, jury, and executioner.

He has a secret list of terrorist groups. He decided they’re at war with America. He decides if you’re a member of that group, if he says that you are, he says he has the right to kill you. And Donald Trump doesn’t just have a list of foreign groups either: under National Security Presidential Memorandum 7, NSPM-7 for short, which he issued this fall, he has a secret list of domestic terror groups, or one is being compiled as we speak, I think. So what’s to stop a lawless president from killing people in America that he deems to be domestic terrorists? If he’s doing this close to home in the Caribbean or the Pacific, it’s the illegal use of lethal force that should worry Americans.

If these boat strikes, the murders of people convicted of no crimes, become accepted as normal, there’s really nothing to stop the president from launching such attacks within the United States.

JW: Yeah, that’s really terrifying Nick, and we appreciate you explaining to us what this expanded scope could mean.

And Gregg, I want to pivot a little bit. In the midst of everything that we’re discussing here, the Pentagon has effectively replaced its original press corps with a group of right-wing influencers. Gregg, does that make uncovering the truth here more difficult?

GL: Yeah, it always does, and we see this from a lot of administrations to different degrees, but they all know that controlling the information can get them what they want in the short term. So it’s a reflexive reaction that almost always backfires because people know when they’re being lied to or when they’re having information withheld from them. So what we’re seeing at the Pentagon where, yeah, amateurs are basically the ones reporting to us now, it doesn’t go without notice, so it’s not a good solution. It’s a blatant, blatantly unconstitutional, denial of rights. They’re actually keeping people out of covering the Pentagon for the American people because they won’t sign a pledge restricting what they can report on. I think it’s an overwhelmingly improper way to handle a government.

JW: Gregg, I want to push a little bit and ask, we’ve obviously seen reporters outside of the building break stories. Nick is one example, but there are countless others. Does it matter for the Pentagon Press Corps to actually be inside of the Pentagon?

GL: I think it does, and it’s not just the Pentagon. I’ve seen this at other agencies too, where the U.S. government has an incredible array of experts on every topic, and people who are fundamentally involved in the controversies that we want to know more about. And any official channel of communication never really tells the full story. There’s always somebody who wants to limit that flow of information. So you can always get better information if you know who the people are behind the scenes. And there’s nothing nefarious or wrong with that. You just get better information to tell the American people how their government is operating. So that’s the way it should work. You don’t sit there and wait for press briefings. You go out and find the information, and you can do that better if you’re in the building.

JW: Yeah. Nick, I want to get your thoughts on this. Does it matter to be in the Pentagon?

NT: You know it might seem odd coming from someone who’s covered national security for 20 some odd years, but never reported from the Pentagon. But I also think that physical access to the building matters.

Maybe I should back up. I never liked the idea of reporters having office space in the Pentagon. I never really thought that reporters should be sharing the same facility. But I firmly believe that reporters should have access to that military facility and every other one, by the same token. And I’ve been known to grumble some about mainstream defense reporters from major outlets sometimes being too chummy with Pentagon sources, laundering too many Pentagon talking points, and failing to push back or call out Pentagon lies. But they also get information and tips that you sometimes just will not get if you’re an adversarial reporter outside of the building. I’ve always thought that there were better ways for folks on the outside and the inside to work together to share information that one or the other couldn’t use for whatever reason. But I still believe that, even failing that, there are people inside the building who can get scoops that I and other reporters outside just can’t. Being in the building can help that; it can help in building rapport.

I’d like to see them get back inside the building. But I also think that maybe this treatment by the Department of War will, in the long run, lead to less reliance on official leaks and maybe finding more dissenters inside the building.

JW: Gregg, I want to go back a second and ask you to talk a little bit more about the pledge. Can you explain for our listeners what the pledge was that outlets were being asked to sign in order to have permission to be in the Pentagon?

GL: It’s not a simple answer to that because it was a massive document they were expected to go through, and the big issue was they couldn’t print anything that wasn’t officially given to them or officially cleared through Pentagon officials.

And you would have to write in a pledge that I understand that I’m in violation of the law if I print anything somebody gives me that hasn’t been officially cleared. That’s just such an outrageous comment. It’s not just saying you can’t talk to people, you can’t go outside of this office, but it’s saying you have to agree that you will only print authorized, officially released information, and that’s just not how journalism works or should work.

JW: Outside of the boat strikes, outside of the Pentagon, Gregg, what is the dangerous precedent that’s being set by replacing the Pentagon Press Corps?

GL: I think it’s just that fundamental issue of, who gets to cover the government? Is it only government-sanctioned information that gets out to the people, or is it people working on behalf of the United States public who get to really hold people to account and dive deep for greater information? And all of that is being compromised, if there’s an administration that says, “We get to completely put a choke-hold on any information that we don’t want to be released.” That is not in any way consistent with the American tradition and it just flies in the face of our well-established preference for a free press. You just don’t have a free press if you have to pledge that you’re not going to give away information just because it hasn’t been cleared. It just shouldn’t work that way, and it hasn’t worked that way. And it’s frightening that we’ve gotten an administration trying to make that the norm.

JW: Nick, do you have any final thoughts?

NT: Since the dawn of the Republic, the United States military has been killing civilians and they’ve been getting away with it. Native Americans in the so-called Indian Wars, Filipinos at the turn of the 20th century, Japanese during World War II, Koreans, Vietnamese, Laotians, Cambodians.

And for the last 20-plus years, Republican and Democratic administrations pioneered lawless killings in the back lands of the planet during the forever wars in Afghanistan, Somalia, Yemen, and on, and on. The details of these wars were kept secret. Civilian casualties were covered up. And now this new extension of the war on terror melded with the war on drugs has come to our doorstep.

We have bogus terrorist designations that are being used to murder people in the Caribbean and the eastern Pacific Ocean, and it could soon occur within the United States. The president has been killing people using the most specious legal reasoning imaginable. And it makes the classic war on terror, as unlawful and murderous as it was, look almost reasonable by comparison.

So I think Americans should be demanding answers and speaking out about a secret enemies list that’s being used to excuse summary executions, or to put it plainly, murder. And a domestic enemies list that the White House and the Justice Department just refuse to say anything about.

JW: Nick, we appreciate your thoughtful analysis. And Gregg, do you have any final thoughts?

GL: Yeah, I think every few years something comes along that reminds us that we need a free press. If things are going too well, people take a free press as a given. They think, of course we’re able to have reporters do what they want.

So in a sense, the bad news can lead to a good effect. We’ve known that since the time of James Madison, who said, “popular government without popular knowledge is a tragedy or a farce, or perhaps both.” Right from the start, we knew that kind of information has to reach the people to have a meaningful democracy.

And as a media lawyer, people get tired of me and other media lawyers saying this kind of access is fundamentally important to democracy, as if we’re saying every incident like this is going to destroy democracy. But in the big picture, they will. When this keeps happening and if this becomes an official policy, it fundamentally threatens how democracy works.

And so I don’t think we’re ever going to overstate the case here. Something like this where you’re actually removing reporters from the Pentagon just truly interferes with how the people of the United States learn about what their government is up to.

JW: We’re going to leave it there. But thank you both so much for joining me on the Intercept Briefing.

GL: Thanks for having me.

NT: Thanks very much.

JW: On Wednesday, the United States intercepted and seized an oil tanker off the coast of Venezuela. President Trump bragged about the move, claiming the tanker was the “largest one ever seized.”

It was a shocking escalation in the United States’ aggression toward the country, as Trump increases pressure on Venezuelan President Nicolás Maduro.

Follow the Intercept for more reporting on this developing story.

That does it for this episode.

This episode was produced by Laura Flynn. Sumi Aggarwal is our executive producer. Ben Muessig is our editor-in-chief. Maia Hibbett is our managing editor. Chelsey B. Coombs is our social and video producer. Desiree Adib is our booking producer. Fei Liu is our product and design manager. Nara Shin is our copy editor. Will Stanton mixed our show. Legal review by David Bralow.

Slip Stream provided our theme music.

If you want to support our work, you can go to theintercept.com/join. Your donation, no matter the amount, makes a real difference. If you haven’t already, please subscribe to The Intercept Briefing wherever you listen to podcasts. And leave us a rating or a review, it helps other listeners to find us.

If you want to send us a message, email us at podcasts@theintercept.com .

Until next time, I’m Jessica Washington.

How does a "you interview for US company, we do the work" scam work?

Hacker News
news.ycombinator.com
2025-12-12 10:55:08
Comments...
Original Article

Got this scam email about an opportunity to earn passive income by acting as the front for an employment fraud scheme.

How does the scammer benefit from this operation?

I can think of 2 ways:

- Personal / private data mining, but this seems quite work intensive for that purpose
- Actually going through with the whole scam and disappearing after first salary payments come through

Any other ideas? Anyone have experience or insight about this?

---

Full email below:

"Hi <name>, I hope you’re doing well.

My name is <sender>, and I’ve been a software developer for over 7 years — mainly full stack, with a strong focus on frontend. I’m reaching out because I’m looking for someone to collaborate with, and I think you're the best one whom I'm looking for.

I used to partner with my friend Jim, and we worked really well together. Sadly, he was diagnosed with cancer about a month ago, and he advised me to find someone new to team up with. That’s why I wanted to talk to you.

Here’s the idea: I’ll handle sending proposals to companies, and you would take care of the interviews with recruiters. My English isn’t strong enough for U.S. interviews, so I’m looking for someone who is confident in English and also has strong technical knowledge. If we land a position, you’d receive a share of the monthly salary, while I would take care of the actual development work.

My initial suggestion is a 50/50 split of the salary after tax. For example, if a job pays $10K per month and taxes are 30% ($3K), the remaining $7K would be split evenly — $3.5K each.

I will manage all the proposals and keep you informed about interview schedules. If things work out and we join a team, I’ll handle the project development. Then you'll get profit every month without any work.

If this sounds interesting to you, please let me know — I’d really like the chance to work together.

Thank you, Best regards, <sender>"

After 27 years, and within budget, Austria opens the 6th-longest railway tunnel in the world

Hacker News
infrastruktur.oebb.at
2025-12-12 10:50:22
Comments...
Original Article

Crossing the Koralpe massif more quickly and with more comfort. That’s what the future of train travel from Graz to Klagenfurt looks like. With the Koralm Railway, you will arrive at your destination even quicker. The fastest connection will shrink from three hours to just 45 minutes. Western Styria and southern Carinthia can be reached even more easily – as can our neighbouring countries Hungary and Italy.

Koralm Railway connects Europe

The economy is also benefiting from the construction of the new Koralm Railway. As part of the new Southern Line, it is strengthening the Baltic-Adriatic Corridor in Europe. Transporting goods in Austria by train is becoming more attractive, which in turn is allowing our operations to remain competitive internationally. And the environment to breathe: Each tonne of freight moved by rail generates around 15 times less CO2 emissions than transporting it by lorry.

Building smaller Docker images faster

Lobsters
sgt.hootr.club
2025-12-12 10:46:40
Comments...
Original Article

I've been tasked (more or less) with building the first Go service at $DAYJOB , which is almost exclusively a Python shop. Why Go of all languages? Well, some of my coworkers are big fans of the gopher, it's an easy language, it's attached to one of the big companies, and it's much faster than Python, so I feel more comfortable pushing for this rather than Rust or (sadly) Nix.

The project that recently fell onto my lap is basically RCE-as-a-service, and it just so happens that one of the few languages that I would feel comfortable letting users execute remotely on our servers has a Go implementation , which is as good an excuse as any to take a break from the usual snekslop.

I still haven't convinced anybody here to get on the Nix train, so after a short stint of building the project and the OCI images with a recursive cycle of Make and Nix, I breathed a heavy sigh and dropped it for Docker and Docker Compose, which is what we generally use here.

Which is a shame, because Nix is pretty good at building OCI images. This is all the code you need to create a minimal image that contains only your service and nothing else, not even /bin/sh .

{ pkgs ? import <nixpkgs> {} }:
pkgs.dockerTools.streamLayeredImage {
  name = "someimage";
  tag = "latest";
  config.Cmd = [
    "${pkgs.hello}/bin/hello"
  ];
}

You can build that with nix-build docker-test.nix and inside ./result you'll find a script that generates the image's tarball. Load it up with ./result | docker load and Docker will report a new image called someimage:latest when you run docker image ls . It's only 45.8MB, and most of that is taken up by glibc.

But what if I told you that you can get more or less the same result with a simple Dockerfile if you know what you're doing? Especially if you're building a static executable, which Go famously does (at least when you set CGO_ENABLED=0 ) and if you're willing to sacrifice a few affordances. Who needs coreutils anyway?

In this post I'll show you a few tricks I found while striving to make small and fast-building images. I'm not an expert in Docker by any stretch so maybe you know all of this stuff already, but perhaps you might learn something new.

A barebones image

I'm using goose for database migrations. I configured the docker-compose.yml to run its container just after the database has started and before the actual service runs, and since it's a very small self-contained tool I figured I could try to make the image as small and quick to build as possible. Let me show you the relevant part of the docker-compose.yml first:

services:
  migrate:
    image: migrate:latest
    pull_policy: build
    build:
      context: https://github.com/pressly/goose.git#v3.26.0
      dockerfile: $PWD/Dockerfile.migrate
    environment:
      GOOSE_DBSTRING: postgresql://AzureDiamond:hunter2@db:5432/bobby
      GOOSE_MIGRATION_DIR: /migrations
      GOOSE_DRIVER: postgres
    depends_on:
      db:
        condition: service_started
    volumes:
      - ./migrations:/migrations

The build.context parameter specifies the "set of files" that are available from the host while building a Docker image. It's the parameter that you pass after docker build , generally . (the PWD). Here I specified a GitHub URL with a tag, so when I refer to . in the Dockerfile below I get the root of the goose project at that tag's commit. I didn't know Docker could do that!

The build.dockerfile path includes $PWD because even if Docker itself can refer to Dockerfiles outside of its context, Docker Compose apparently can't unless you specify it as an absolute path. I think this will break if you try to run docker compose from another directory, but it's good enough for now.

Now let's take a look at the Dockerfile itself:

FROM golang:1.25-alpine3.23 AS builder

WORKDIR /build

ARG CGO_ENABLED=0

ARG GOCACHE=/root/.cache/go-build
ARG GOMODCACHE=/root/.cache/go-mod

RUN --mount=type=cache,target=/root/.cache/go-build \
  --mount=type=cache,target=/root/.cache/go-mod \
  --mount=type=bind,source=.,target=/build \
  go build -tags='no_clickhouse no_libsql no_sqlite3 no_mssql no_vertica no_mysql no_ydb' -o /goose ./cmd/goose

FROM scratch

COPY --from=builder /goose /goose

CMD ["/goose", "up"]

A few things to note:

  • I used the Alpine image for Go because it's the smallest and it includes everything I need to build the tool.
  • I set CGO_ENABLED=0 to ensure that the executable only builds with the Go toolchain and does not link to libc. According to the docs , "The cgo tool is enabled by default for native builds on systems where it is expected to work," so you have to take extra care to disable it if you don't want or need it.
  • I set GOCACHE and GOMODCACHE ( docs ) to a known location to ensure that I can take advantage of cache mounts on subsequent rebuilds. Admittedly this is not very useful for an external tool that I'm only expecting to build once but hey, every little bit helps.
  • Instead of ADD ing or COPY ing the package's source I bind mounted it to the workdir. This should make it a bit faster because it avoids an extra copy into the build container.
  • FROM scratch defines a second build stage that ensures any artifacts from the actual build are discarded. scratch is the null image.

The result is an image with one layer containing one file that builds, loads and boots extremely fast and is only *chef's kiss* 15.9MB in size. Not too bad!

[Screenshot: dive showing the resulting image]
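
If you don't have dive handy, a couple of stock Docker commands are enough to sanity-check that claim (a minimal sketch, assuming the migrate:latest tag from the compose file above):

# The reported size should be in the ~16MB ballpark
docker image ls migrate:latest

# The only non-empty layer should be the COPY of /goose onto the scratch base
docker image history migrate:latest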

The build context

Earlier I mentioned the build context. Keeping it small is important because Docker copies all files available in the build context to the builder every time you run a docker build , so if you have lots of files in your repo that you don't need to build the image you'll probably want to exclude them. The way you do that is by keeping a .dockerignore file alongside your Dockerfile .

I have to stress one point: the build context includes everything inside the directory you run docker build from unless it's listed in .dockerignore . I thought some obvious things like .git would be excluded by default, but a quick test disproved that:

FROM busybox
WORKDIR /
COPY . ./
CMD []

Try saving that file as Dockerfile.test in one of your repos, build it with docker build -f Dockerfile.test -t build-context . , open a shell with docker run --rm -it build-context /bin/sh and run find . Everything's in there: .git , .jj , the Dockerfile.test itself, and all the rest of the build artifacts and assorted junk you have accumulated in your project directory. Ignoring them won't make your images smaller, but it might make the build quicker.

# .dockerignore
.*

Dockerfile*
docker-compose.yml

# ...

Granular layers

Splitting and ordering layers is probably the most well-known and obvious Docker build time optimization there is, but it doesn't hurt to mention. This is the Dockerfile for the service I'm building:

FROM golang:1.25-alpine3.23 AS builder

WORKDIR /build

ARG CGO_ENABLED=0

ARG GOCACHE=/root/.cache/go-build
ARG GOMODCACHE=/root/.cache/go-mod

RUN --mount=type=cache,target=/root/.cache/go-build \
  --mount=type=cache,target=/root/.cache/go-mod \
  go install github.com/DataDog/orchestrion@latest

RUN --mount=type=bind,source=go.mod,target=go.mod \
  --mount=type=bind,source=go.sum,target=go.sum \
  --mount=type=cache,target=/root/.cache/go-build \
  --mount=type=cache,target=/root/.cache/go-mod \
  go mod download

RUN --mount=type=bind,source=go.mod,target=go.mod \
  --mount=type=bind,source=go.sum,target=go.sum \
  --mount=type=cache,target=/root/.cache/go-build \
  --mount=type=cache,target=/root/.cache/go-mod \
  --mount=type=bind,source=internal,target=internal \
  --mount=type=bind,source=cmd,target=cmd \
  --mount=type=bind,source=orchestrion.tool.go,target=orchestrion.tool.go \
  orchestrion go build -o server ./cmd/server

FROM alpine:3.23 AS prod

WORKDIR /app

COPY --from=builder /build/server .

ENTRYPOINT ["/app/server"]

You can see all the bind and cache mount tricks from earlier, not much has changed. I'm installing orchestrion (a great tool to bloat up your binary size if you ever feel like it) early because that's the least likely to change. After that I bind mount go.mod and go.sum and only download the dependencies, because that's the step that generally takes the most time and dependencies change less often than code. Only at the end do I bind mount the package directories and build the server.

You can iterate on this by changing one of the relevant files and ensuring that all the previous steps are marked as CACHED on the next docker build . Again, this won't save you space but it'll save you a lot of time while iterating.

Conclusions

Summing up:

  • Read the Optimize cache usage in builds page in the Docker documentation, and maybe take a look at the rest while you're there.
  • Add a .dockerignore to your project, and make sure to put .git in there if you don't need it.
  • Try to set up cache mounts to make sure dependencies and intermediate build artifacts are persisted across builds. Some CI services (GitHub actions, apparently) even let you set up external caches .
  • Try to use bind mounts instead of COPY ing source files into the builder. Bind mounts are usually read-only, but you can make them read-write if you really need to.
  • I didn't use it in my Dockerfile s, but apparently you should be using ADD for downloading files, archives or Git repos during your build. The docs mention that COPY should mostly be used for copying files between stages or for files you need in your final image, and for the rest you should try to use bind mounts.
  • Use smaller base images when possible, and if you really have to install stuff with apt you should make an intermediate image and upload it to your container repository of choice, so your poor runners don't have to hammer Debian's mirrors every single time you push (see the sketch below).
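
For that last point, here's a minimal sketch of what such a pre-baked base image could look like; the base image, registry name, and package list are placeholders:

# Dockerfile.base: bake the apt layer once, push it, and reuse it as a FROM line
FROM debian:bookworm-slim

RUN apt-get update \
  && apt-get install -y --no-install-recommends ca-certificates curl \
  && rm -rf /var/lib/apt/lists/*

Build and push it once ( docker build -f Dockerfile.base -t registry.example.com/base/debian-curl:1 . followed by docker push ), and your service Dockerfiles can start FROM that tag instead of re-running apt on every CI build.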

One last trick before the post is over: I just discovered that Docker Compose has a watch mode that tracks changes in the build context and rebuilds (or does other things with) your image on every change. Nice!
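
If you want to try that watch mode, here's a minimal sketch of the relevant compose configuration; the service name and paths are illustrative, so check the Compose docs for the exact syntax that fits your setup:

services:
  server:
    build: .
    develop:
      watch:
        # Rebuild the image whenever the Go sources change
        - action: rebuild
          path: ./cmd
        - action: rebuild
          path: ./internal

With that in place, docker compose watch brings the service up and rebuilds the image whenever files under those paths change.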

Building Trustworthy AI Agents

Schneier
www.schneier.com
2025-12-12 10:25:47
The promise of personal AI assistants rests on a dangerous assumption: that we can trust systems we haven’t made trustworthy. We can’t. And today’s versions are failing us in predictable ways: pushing us to do things against our own best interests, gaslighting us with doubt about things we are or th...
Original Article

The promise of personal AI assistants rests on a dangerous assumption: that we can trust systems we haven’t made trustworthy. We can’t. And today’s versions are failing us in predictable ways: pushing us to do things against our own best interests, gaslighting us with doubt about things we are or that we know, and being unable to distinguish between who we are and who we have been. They struggle with incomplete, inaccurate, and partial context: with no standard way to move toward accuracy, no mechanism to correct sources of error, and no accountability when wrong information leads to bad decisions.

These aren’t edge cases. They’re the result of building AI systems without basic integrity controls. We’re in the third leg of data security—the old CIA triad. We’re good at availability and working on confidentiality, but we’ve never properly solved integrity. Now AI personalization has exposed the gap by accelerating the harms.

The scope of the problem is large. A good AI assistant will need to be trained on everything we do and will need access to our most intimate personal interactions. This means an intimacy greater than your relationship with your email provider, your social media account, your cloud storage, or your phone. It requires an AI system that is both discreet and trustworthy when provided with that data. The system needs to be accurate and complete, but it also needs to be able to keep data private: to selectively disclose pieces of it when required, and to keep it secret otherwise. No current AI system is even close to meeting this.

To further development along these lines, I and others have proposed separating users’ personal data stores from the AI systems that will use them. It makes sense; the engineering expertise that designs and develops AI systems is completely orthogonal to the security expertise that ensures the confidentiality and integrity of data. And by separating them, advances in security can proceed independently from advances in AI.

What would this sort of personal data store look like? Confidentiality without integrity gives you access to wrong data. Availability without integrity gives you reliable access to corrupted data. Integrity enables the other two to be meaningful. Here are six requirements. They emerge from treating integrity as the organizing principle of security to make AI trustworthy.

First, it would be broadly accessible as a data repository. We each want this data to include personal data about ourselves, as well as transaction data from our interactions. It would include data we create when interacting with others—emails, texts, social media posts—and revealed preference data as inferred by other systems. Some of it would be raw data, and some of it would be processed data: revealed preferences, conclusions inferred by other systems, maybe even raw weights in a personal LLM.

Second, it would be broadly accessible as a source of data. This data would need to be made accessible to different LLM systems. This can’t be tied to a single AI model. Our AI future will include many different models—some of them chosen by us for particular tasks, and some thrust upon us by others. We would want the ability for any of those models to use our data.

Third, it would need to be able to prove the accuracy of data. Imagine one of these systems being used to negotiate a bank loan, or participate in a first-round job interview with an AI recruiter. In these instances, the other party will want both relevant data and some sort of proof that the data are complete and accurate.

Fourth, it would be under the user’s fine-grained control and audit. This is a deeply detailed personal dossier, and the user would need to have the final say in who could access it, what portions they could access, and under what circumstances. Users would need to be able to grant and revoke this access quickly and easily, and be able to go back in time and see who has accessed it.

Fifth, it would be secure. The attacks against this system are numerous. There are the obvious read attacks, where an adversary attempts to learn a person’s data. And there are also write attacks, where adversaries add to or change a user’s data. Defending against both is critical; this all implies a complex and robust authentication system.

Sixth, and finally, it must be easy to use. If we’re envisioning digital personal assistants for everybody, it can’t require specialized security training to use properly.

I’m not the first to suggest something like this. Researchers have proposed a “Human Context Protocol” (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5403981) that would serve as a neutral interface for personal data of this type. And in my capacity at a company called Inrupt, Inc., I have been working on an extension of Tim Berners-Lee’s Solid protocol for distributed data ownership.

The engineering expertise to build AI systems is orthogonal to the security expertise needed to protect personal data. AI companies optimize for model performance, but data security requires cryptographic verification, access control, and auditable systems. Separating the two makes sense; you can’t ignore one or the other.

Fortunately, decoupling personal data stores from AI systems means security can advance independently from performance (https://ieeexplore.ieee.org/document/10352412). When you own and control your data store with high integrity, AI can’t easily manipulate you because you see what data it’s using and can correct it. It can’t easily gaslight you because you control the authoritative record of your context. And you determine which historical data are relevant or obsolete. Making this all work is a challenge, but it’s the only way we can have trustworthy AI assistants.

This essay was originally published in IEEE Security & Privacy .


ICE Prison’s 911 Calls Overwhelm a Rural Georgia Emergency System

Intercept
theintercept.com
2025-12-12 10:00:00
The 911 operator couldn’t send an ambulance because it was already responding to another call from ICE’s Stewart Detention Center. The post ICE Prison’s 911 Calls Overwhelm a Rural Georgia Emergency System appeared first on The Intercept....
Original Article

“Male detainee needs to go out due to head trauma,” an employee at a U.S. Immigration and Customs Enforcement detention center in Georgia tells a 911 operator.

The operator tells the employee at Stewart Detention Center that there are no ambulances available.

“It’s already out — on the last patient y’all called us with,” the operator says.

“Is there any way you can get one from another county?” the caller asks.

“I can try,” the operator says. “I can’t make any promises, but I can try.”

Listen to the 911 call

The call was one of dozens from the ICE detention facility seeking help with medical emergencies during the first 10 months of the second Trump administration, a sustained period of high call volume from the jail not seen since 2018.

Emergency calls were made to 911 at least 15 times a month from Stewart Detention Center for six months in a row as of November 1.

As with the April 1 call concerning a detainee’s head trauma, emergency dispatch records show that the ambulance service in Stewart County, Georgia, where the detention center is located, has had to seek help outside the county more often than at any time in at least five years — including three instances in November alone.

The burden on rural Stewart County’s health care system is “unsustainable,” said Dr. Amy Zeidan, a professor of emergency medicine at Atlanta’s Emory University who researches health care in immigration detention.

“People are going to die if they don’t get medical care,” said Zeidan. “All it takes is one person who needs a life-saving intervention and doesn’t have access to it.”

“People are going to die if they don’t get medical care.”

This continuous barrage of calls for help with acute medical needs reflects increased detainee populations without changes to medical staffing and capacities, experts told The Intercept. Shifting detainee populations, they said, may also be exacerbating the situation: Older immigrants and those with disabilities or severe health issues used to be more frequently let out on bond as their cases were resolved, but ICE’s mass deportation push has led to an increase in their detention.

With the number of people in immigration detention ballooning nationwide, health care behind bars has become an issue in local and state politics. In Washington state, for instance, legislators passed a law last year giving state-level authorities more oversight of detention facilities. A recent court ruling granted state health department officials access to a privately operated ICE detention center to do health inspections. (A spokesperson from Georgia’s health department did not answer questions about the high volume and types of calls at Stewart.)

911 calls from Stewart included several for “head trauma,” such as one case where an inmate was “beating his head against the wall” and another following a fight.

Impacts of the situation are hard to measure in the absence of comprehensive, detailed data, but they extend both to Stewart’s detainee population — which has increased from about 1,500 to about 1,900 during the Trump administration — and to the surrounding, rural county. (ICE did not respond to a request for comment.)

The data on 911 calls represent what Dr. Marc Stern, a consultant on health care for the incarcerated, called “a red flag.”

Illness and Injuries

Data obtained by The Intercept through open records requests shows that the top four reasons for 911 calls since the onset of the second Trump administration have been chest pains and seizures, with the same number of calls, followed by stomach pains and head injuries.

Neither written call records nor recordings of the calls themselves offer much insight into the causes of injuries. One cause of head traumas, though, could be fights between detainees, said Amilcar Valencia, the executive director of El Refugio, a Georgia-based organization that works with people held at Stewart and their families and loved ones.

“It’s not a secret that Stewart detention center is overcrowded,” he said. “This creates tension.”

Issues such as access to phones for calls to attorneys or loved ones can lead to fights, he said.

Another issue may be self-harm, suggested testimony from Rodney Scott, a Liberian-born Georgia resident of four decades who has been detained in Stewart since January. One day in September, Scott, who is a double amputee and suffers high blood pressure and other health issues, said he saw a fellow detainee climb about 20 stairs across a hall from him and jump over a railing, landing several stories below.

“He hit his head,” Scott said. “It was shocking to see someone risk his life like that.”

He doesn’t know what happened to the man.

On another day, about a month earlier, Scott saw a man try to kill himself with razors.

“He went in, cut himself with blades, after breakfast,” Scott said. “There was a pool of blood,” he said. “It looked like a murder scene.”

In addition to interpersonal tensions, large numbers of detainees in crowded conditions can strain a facility’s medical capacities.

“People are becoming sicker than what the system can handle.”

“There’s a mismatch between the number of people and health workers,” said Joseph Nwadiuko, a professor of medicine at the University of Pennsylvania who researches the immigration detention system. “People are becoming sicker than what the system can handle. The complexity of patients is above and beyond what Stewart is prepared for.”

CoreCivic, the company that operates Stewart, is currently advertising to hire a psychiatrist, a dental assistant, and two licensed practical nurses at the detention center. (The company did not respond to a request for comment.)

“A Lack of Accountability”

The situation at hand also potentially impacts the residents of Stewart County , a sprawling tract of about 450 square miles in southwest Georgia. About 28 percent of the county’s nearly 5,000 residents, two-thirds of whom are Black, live below the poverty line.

The county has two ambulances, and there are no hospitals. The nearest facilities equipped to handle calls coming from the ICE detention center are in neighboring counties about 45 minutes to the east or nearly an hour north. County Manager Mac Moye, though, was nonplussed when presented with the data on the sustained high volume of 911 calls from the detention center.

“We are in a very rural, poor county, with very low population density,” he said. “We’ve always had slow responses compared to, let’s say, Columbus” — the city of 200,000 nearly 45 miles north where one of the nearest hospitals is located.

“We run two ambulances; most surrounding counties have one,” he continued. “We have more money, because of Stewart” — the detention center.

The ICE facility paid nearly $600,000 in fees in fiscal year 2022, the latest year for which data is available, or about 13 percent of the county’s general fund of $4.4 million.

Moye, who worked at the detention center before taking his current job, also called into question whether 911 calls were always made for legitimate reasons. The county manager did not comment on whether his own constituents are increasingly more at risk in situations like the one on April 1, when no ambulance was available to answer a call from the detention center.

“It’s still faster than if we had one ambulance,” he said. “We wish we would never have to call another county, and deal with every call on our own.”

As for the conditions facing detainees, particularly given the types of emergencies the detention center calls 911 about, Moye said, “It’s difficult to comment on what’s happening over there, because we don’t have any control over it.”

That points to a larger problem reflected in the increased calls.

“Obviously, a prison is a prison — it’s blind to the rest of the world,” said Nwadiuko, the Penn professor. “There’s a moral hazard for conditions that don’t occur elsewhere, a lack of accountability.”

“Do No Harm”?

“Seizures, chest pains — are they preventable? Why is it happening?” said Stern, the doctor who consults on carceral health care, commenting on the high volume and types of calls. “Could mean that access or the quality of care is poor. It’s a red flag if the number is high or increasing, and it indicates that investigation is required.”

In September, Democratic Georgia Sens. Raphael Warnock and Jon Ossoff sent a letter to Homeland Security Secretary Kristi Noem and ICE Acting Director Todd Lyons expressing concern over the 14 deaths in ICE custody this year, including Jesus Molina-Veya, whose June 7 death at Stewart has been reported as a suicide.

The letter sought answers to a series of detailed questions by October 31 about the care Stewart and other ICE detention centers are providing to detainees. Warnock and Ossoff’s offices said they have not received a reply. Ossoff also released an investigation in October called “Medical Neglect and Denial of Adequate Food or Water in U.S. Immigration Detention” that included information gathered at Stewart.

Zeidan, the Emory professor, noted that there’s little information about what happens to ICE detainees once they reach a hospital.

“What happens after detainees are admitted?” Zeidan said. “Are they discharged? Are they getting comprehensive, follow-up care?”

Nwadiuko echoed the concern.

“Are doctors and hospitals using good judgment regarding when going back to a detention facility doesn’t mean ‘a safe discharge’?” he said. “We have an oath: ‘Do no harm.’ That may conflict with an institution’s desire to minimize a detainee’s time outside the gates of the detention center.”

What are you doing this weekend?

Lobsters
lobste.rs
2025-12-12 09:53:15
Feel free to tell what you plan on doing this weekend and even ask for help or feedback. Please keep in mind it’s more than OK to do nothing at all too!...
Original Article

Feel free to tell what you plan on doing this weekend and even ask for help or feedback.

Please keep in mind it’s more than OK to do nothing at all too!

CISA orders feds to patch actively exploited Geoserver flaw

Bleeping Computer
www.bleepingcomputer.com
2025-12-12 09:48:31
CISA has ordered U.S. federal agencies to patch a critical GeoServer vulnerability now actively exploited in XML External Entity (XXE) injection attacks. [...]...
Original Article

CISA has ordered U.S. federal agencies to patch a critical GeoServer vulnerability now actively exploited in XML External Entity (XXE) injection attacks.

In such attacks, an XML input containing a reference to an external entity is processed by a weakly configured XML parser, allowing threat actors to launch denial-of-service attacks, access confidential data, or perform Server-Side Request Forgery (SSRF) to interact with internal systems.
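
GeoServer itself is a Java application, and the fix there is the vendor patch; but the underlying class of mitigation is general: XML parsers handling untrusted input should never resolve external entities. As a minimal illustrative sketch only (the function name and settings below are our own, not GeoServer code), here is how that hardening looks in Python with lxml:

    # Hypothetical handler for untrusted XML, shown only to illustrate the
    # XXE mitigation class: refuse DTDs, entity expansion, and network fetches.
    from lxml import etree

    def parse_untrusted_xml(xml_bytes: bytes):
        parser = etree.XMLParser(
            resolve_entities=False,  # do not expand <!ENTITY ...> definitions
            load_dtd=False,          # ignore inline and external DTDs
            no_network=True,         # never fetch external resources
        )
        return etree.fromstring(xml_bytes, parser=parser)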

The security flaw, tracked as CVE-2025-58360 and flagged by CISA on Thursday, is an unauthenticated XML External Entity (XXE) vulnerability in GeoServer, an open-source server for sharing geospatial data over the Internet. It affects version 2.26.1 and earlier and can be exploited to retrieve arbitrary files from vulnerable servers.

"An XML External Entity (XXE) vulnerability was identified affecting GeoServer 2.26.1 and prior versions. The application accepts XML input through a specific endpoint /geoserver/wms operation GetMap," a GeoServer advisory explains .

"However, this input is not sufficiently sanitized or restricted, allowing an attacker to define external entities within the XML request."

The Shadowserver Internet watchdog group now tracks 2,451 IP addresses with GeoServer fingerprints, while Shodan reports over 14,000 instances exposed online.

[Image: GeoServer instances exposed online (Shadowserver)]

​CISA has now added CVE-2025-58360 to its Known Exploited Vulnerabilities (KEV) Catalog , warning that the flaw is being actively exploited in attacks and ordering Federal Civilian Executive Branch (FCEB) agencies to patch servers by January 1st, 2026, as mandated by the Binding Operational Directive (BOD) 22-01 issued in November 2021.

FCEB agencies are non-military agencies within the U.S. executive branch, such as the Department of Energy, the Department of the Treasury, the Department of Homeland Security, and the Department of Health and Human Services.

Although BOD 22-01 only applies to federal agencies, the U.S. cybersecurity agency urged network defenders to prioritize patching this vulnerability as soon as possible.

"These types of vulnerabilities are frequent attack vectors for malicious cyber actors and pose significant risks to the federal enterprise," CISA said . "Apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable."

Last year, CISA also added OSGeo GeoServer JAI-EXT code injection ( CVE-2022-24816 ) and GeoTools eval injection ( CVE-2024-36401 ) vulnerabilities to its list of actively exploited security flaws.

As the cybersecurity agency revealed in September, the latter was exploited to breach an unnamed U.S. government agency in 2024 after compromising an unpatched GeoServer instance.

The Game Awards 2025: the full list of winners

Guardian
www.theguardian.com
2025-12-12 09:00:10
Every prize at The Game Awards from the Peacock theater in Los Angeles. Star Wars, Tomb Raider and a big night for Expedition 33 – what you need to know from The Game Awards. Clair Obscur: Expedition 33 – WINNER; Death Stranding 2: On the Beach; Donkey Kong Bananza; Hades II; Hollow Knight: Silksong ...
Original Article

Game of the year

Clair Obscur: Expedition 33 – WINNER
Death Stranding 2: On the Beach
Donkey Kong Bananza
Hades II
Hollow Knight: Silksong
Kingdom Come: Deliverance II

Best game direction

Clair Obscur: Expedition 33 – WINNER
Death Stranding 2: On the Beach
Ghost of Yōtei
Hades II
Split Fiction

Best narrative

Clair Obscur: Expedition 33 – WINNER
Death Stranding 2: On the Beach
Ghost of Yōtei
Kingdom Come: Deliverance II
Silent Hill F

Best art direction

Clair Obscur: Expedition 33 – WINNER
Death Stranding 2: On the Beach
Ghost of Yōtei
Hades II
Hollow Knight: Silksong

Best score and music

Christopher Larkin, Hollow Knight: Silksong
Darren Korb, Hades II
Lorien Testard, Clair Obscur: Expedition 33 – WINNER
Toma Otowa, Ghost of Yōtei
Woodkid and Ludvig Forssell, Death Stranding 2: On the Beach

Best audio design

Battlefield 6 – WINNER
Clair Obscur: Expedition 33
Death Stranding 2: On the Beach
Ghost of Yōtei
Silent Hill F

Best performance

Ben Starr, Clair Obscur: Expedition 33
Charlie Cox, Clair Obscur: Expedition 33
Erika Ishii, Ghost of Yōtei
Jennifer English, Clair Obscur: Expedition 33 – WINNER
Konatsu Kato, Silent Hill F
Troy Baker, Indiana Jones and the Great Circle

Innovation in accessibility

Assassin’s Creed Shadows
Atomfall
Doom: The Dark Ages – WINNER
EA Sports FC 26
South of Midnight

Games for impact

Consume Me
Despelote
Lost Records: Bloom & Rage
South of Midnight – WINNER
Wanderstop

Best ongoing

Final Fantasy XIV
Fortnite
Helldivers 2
Marvel Rivals
No Man’s Sky – WINNER

Baldur’s Gate 3 – WINNER
Final Fantasy XIV
Fortnite
Helldivers 2
No Man’s Sky

Best independent game

Absolum
Ball x Pit
Blue Prince
Clair Obscur: Expedition 33 – WINNER
Hades II
Hollow Knight: Silksong

Best debut indie game

Blue Prince
Clair Obscur: Expedition 33 – WINNER
Despelote
Dispatch

Best mobile game

Destiny: Rising
Persona 5: The Phantom X
Sonic Rumble
Umamusume: Pretty Derby – WINNER
Wuthering Waves

Best VR/AR

Alien: Rogue Incursion
Arken Age
Ghost Town
Marvel’s Deadpool VR
The Midnight Walk – WINNER

Best action game

Battlefield 6
Doom: The Dark Ages
Hades II – WINNER
Ninja Gaiden 4
Shinobi: Art of Vengeance

Best action/adventure

Death Stranding 2: On the Beach
Ghost of Yōtei
Hollow Knight: Silksong – WINNER
Indiana Jones and The Great Circle
Split Fiction

Best RPG

Avowed
Clair Obscur: Expedition 33 – WINNER
Kingdom Come: Deliverance II
Monster Hunter Wilds
The Outer Worlds 2

Best fighting

2XKO
Capcom Fighting Collection 2
Fatal Fury: City of the Wolves
Mortal Kombat: Legacy Collection
Virtua Fighter 5 R.E.V.O. World Stage

Best family

Donkey Kong Bananza – WINNER
Lego Party!
Lego Voyagers
Mario Kart World
Sonic Racing: Crossworlds
Split Fiction

Best sim/strategy

Final Fantasy Tactics – The Ivalice Chronicles – WINNER
Jurassic World Evolution 3
Sid Meier’s Civilization VII
Tempest Rising
The Alters
Two Point Museum

Best sports/racing

EA Sports FC 26
F1 25
Mario Kart World – WINNER
Rematch
Sonic Racing: Crossworlds

Best multiplayer

Arc Raiders – WINNER
Battlefield 6
Elden Ring Nightreign
Peak
Split Fiction

Best adaptation

A Minecraft Movie
Devil May Cry
Splinter Cell: Deathwatch
The Last of Us: Season 2 – WINNER
Until Dawn

Most anticipated game

007 First Light
Grand Theft Auto VI – WINNER
Marvel’s Wolverine
Resident Evil Requiem
The Witcher IV

Content creator of the year

Caedrel
Kai Cenat
MoistCr1TiKaL – WINNER
Sakura Miko
The Burnt Peanut

Best eSports game

Counter-Strike 2 – WINNER
DOTA 2
League of Legends
Mobile Legends: Bang Bang
Valorant

Best eSports athlete

Brawk (Brock Somerhalder)
Chovy (Jeong Ji-Hoon) – WINNER
f0rsakeN (Jason Susanto)
Kakeru (Kareru Watanabe)
MenaRD (Saul Leonardo)
ZywOo (Mathieu Herbaut)

Best eSports team

Gen.G
NRG
Team Falcons
Team Liquid PH
Team Vitality – WINNER

Players’ voice

Clair Obscur: Expedition 33
Dispatch
Genshin Impact
Hollow Knight: Silksong
Wuthering Waves – WINNER

Star Wars, Tomb Raider and a big night for Expedition 33 – what you need to know from The Game Awards

Guardian
www.theguardian.com
2025-12-12 08:56:46
Clair Obscur: Expedition 33 won nine awards, including game of the year, while newly announced games at the show include the next project from Baldur’s Gate 3 developer Larian Studios. At Los Angeles’ Peacock theater last night, The Game Awards broadcast its annual mix of prize presentations and ...
Original Article

At Los Angeles’ Peacock theater last night, The Game Awards broadcast its annual mix of prize presentations and expensive video game advertisements. New titles were announced, celebrities appeared, and at one point, screaming people were suspended from the ceiling in an extravagant promotion for a new role-playing game.

Acclaimed French adventure Clair Obscur: Expedition 33 began the night with 12 nominations – the most in the event’s history – and ended it with nine awards. The Gallic favourite took game of the year, as well as awards for best game direction, best art direction, best narrative and best performance (for actor Jennifer English).

Elsewhere, Hades II took best action game, Hollow Knight: Silksong won in best action/adventure and Arc Raiders won best multiplayer. There was a decent showing for the new(ish) Nintendo Switch 2, with Donkey Kong Bananza taking best family game and Mario Kart World scorching across the line with best sports/racing game.

It was also a night of big budget video game announcements. Star Wars: Fate of the Old Republic, a spiritual successor to the classic Xbox series Knights of the Old Republic, was revealed with the original game director Casey Hudson at the helm. Developed by Arcanaut Studios, it’s a single-player narrative adventure set on the brink of major change for the galaxy.

Another returning classic was Tomb Raider, which is getting two fresh instalments – a remake of the 1996 original named Legacy of Atlantis, and a new adventure, Tomb Raider: Catalyst.

A newcomer to the Divinity series of role-playing adventures was unveiled with a frankly gross trailer . Created by Larian Studios, maker of Baldur’s Gate 3, the game was teased by Game Awards organiser Geoff Keighley who posted photos on social media showing a mysterious statue along with its coordinates in the Mojave desert.

An unexpected highlight was the lovely looking Coven of the Chicken Foot, a fantasy puzzle platformer from Wildflower Interactive, a new indie studio set up by Naughty Dog veteran Bruce Straley, who worked on both Uncharted and The Last of Us. You play an elderly witch called Gertie and a lumbering creature in what looks like an arthouse riff on The Last Guardian.

There was a new trailer for Capcom’s latest survival horror opus Resident Evil Requiem which showed the return of series favourite Leon S Kennedy, now sporting a floppy fringe and leather coat like some sort of emo super cop. He’ll be playable alongside FBI analyst Grace Ashcroft, the characters experiencing parallel adventures with different special abilities.

Among dozens of other reveals was 4:Loop , a new co-op shooter from the makers of Left 4 Dead and JJ Abrams’ Bad Robot Games, featuring eternally cloned and re-cloned warriors trying to reclaim the world from alien invasion. Ontos is a sci-fi mystery from Frictional, creator of Soma, set in a repurposed hotel on the moon. Remedy Entertainment unveiled the sequel to its acclaimed sci-fi adventure Control; named Control Resonant , it takes place in a Manhattan reshaped by an invading cosmic force. Wildlight Entertainment, a new studio from members of the Apex Legends and Call of Duty: Modern Warfare teams showed off Highguard , a free-to-play raid shooter. And Wizards of the Coast materialised with Warlock , a dark single-player action-adventure set in the Dungeons and Dragons universe.

The Game Awards is a controversial and imperfect beast . But this will be remembered as a night in which an ambitious debut game from a small French studio , with a budget of less than $10m, took all the major awards, beating vastly expensive sequels such as Death Stranding 2 and Ghost of Yōtei. In a modern games industry of billion dollar takeovers and mass redundancies, we must grasp such positives where we can.

MITRE shares 2025's top 25 most dangerous software weaknesses

Bleeping Computer
www.bleepingcomputer.com
2025-12-12 08:43:16
MITRE has shared this year's top 25 list of the most dangerous software weaknesses behind over 39,000 security vulnerabilities disclosed between June 2024 and June 2025. [...]...
Original Article

MITRE has shared this year's top 25 list of the most dangerous software weaknesses behind over 39,000 security vulnerabilities disclosed between June 2024 and June 2025.

The list was released in cooperation with the Homeland Security Systems Engineering and Development Institute (HSSEDI) and the Cybersecurity and Infrastructure Security Agency (CISA), which manage and sponsor the Common Weakness Enumeration (CWE) program.

Software weaknesses can be flaws, bugs, vulnerabilities, or errors found in a software's code, implementation, architecture, or design, and attackers can abuse them to breach systems running the vulnerable software. Successful exploitation allows threat actors to gain control over compromised devices and trigger denial-of-service attacks or access sensitive data.

To create this year's ranking, MITRE scored each weakness based on its severity and frequency after analyzing 39,080 CVE Records for vulnerabilities reported between June 1, 2024, and June 1, 2025.

While Cross-Site Scripting ( CWE-79 ) still retains its spot at the top of the Top 25, there were many changes in rankings from last year's list, including Missing Authorization ( CWE-862 ), Null Pointer Dereference ( CWE-476 ), and Missing Authentication ( CWE-306 ), which were the biggest movers up the list.
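
For readers less familiar with the categories, CWE-79 is the classic case of reflecting untrusted input into a page without encoding it. A minimal, generic illustration in Python (our own example, not tied to any product on the list):

    import html

    def render_greeting_unsafe(name: str) -> str:
        # CWE-79: attacker-controlled 'name' is inserted into HTML verbatim,
        # so a value like "<script>...</script>" runs in the victim's browser.
        return f"<p>Hello, {name}!</p>"

    def render_greeting_safe(name: str) -> str:
        # Basic fix: encode special characters before they reach the markup.
        return f"<p>Hello, {html.escape(name)}!</p>"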

The new entries in this year's top-most severe and prevalent weaknesses are Classic Buffer Overflow ( CWE-120 ), Stack-based Buffer Overflow ( CWE-121 ), Heap-based Buffer Overflow ( CWE-122 ), Improper Access Control ( CWE-284 ), Authorization Bypass Through User-Controlled Key ( CWE-639 ), and Allocation of Resources Without Limits or Throttling ( CWE-770 ).

Rank | ID | Name | Score | KEV CVEs | Change
1 | CWE-79 | Cross-site Scripting | 60.38 | 7 | 0
2 | CWE-89 | SQL Injection | 28.72 | 4 | +1
3 | CWE-352 | Cross-Site Request Forgery (CSRF) | 13.64 | 0 | +1
4 | CWE-862 | Missing Authorization | 13.28 | 0 | +5
5 | CWE-787 | Out-of-bounds Write | 12.68 | 12 | -3
6 | CWE-22 | Path Traversal | 8.99 | 10 | -1
7 | CWE-416 | Use After Free | 8.47 | 14 | +1
8 | CWE-125 | Out-of-bounds Read | 7.88 | 3 | -2
9 | CWE-78 | OS Command Injection | 7.85 | 20 | -2
10 | CWE-94 | Code Injection | 7.57 | 7 | +1
11 | CWE-120 | Classic Buffer Overflow | 6.96 | 0 | N/A
12 | CWE-434 | Unrestricted Upload of File with Dangerous Type | 6.87 | 4 | -2
13 | CWE-476 | NULL Pointer Dereference | 6.41 | 0 | +8
14 | CWE-121 | Stack-based Buffer Overflow | 5.75 | 4 | N/A
15 | CWE-502 | Deserialization of Untrusted Data | 5.23 | 11 | +1
16 | CWE-122 | Heap-based Buffer Overflow | 5.21 | 6 | N/A
17 | CWE-863 | Incorrect Authorization | 4.14 | 4 | +1
18 | CWE-20 | Improper Input Validation | 4.09 | 2 | -6
19 | CWE-284 | Improper Access Control | 4.07 | 1 | N/A
20 | CWE-200 | Exposure of Sensitive Information | 4.01 | 1 | -3
21 | CWE-306 | Missing Authentication for Critical Function | 3.47 | 11 | +4
22 | CWE-918 | Server-Side Request Forgery (SSRF) | 3.36 | 0 | -3
23 | CWE-77 | Command Injection | 3.15 | 2 | -10
24 | CWE-639 | Authorization Bypass via User-Controlled Key | 2.62 | 0 | +6
25 | CWE-770 | Allocation of Resources w/o Limits or Throttling | 2.54 | 0 | +1

"Often easy to find and exploit, these can lead to exploitable vulnerabilities that allow adversaries to completely take over a system, steal data, or prevent applications from working," MITRE said .

"This annual list identifies the most critical weaknesses adversaries exploit to compromise systems, steal data, or disrupt services. CISA and MITRE encourage organizations to review this list and use it to inform their respective software security strategies," the U.S. Cybersecurity and Infrastructure Security Agency (CISA) added .

In recent years, CISA has issued multiple "Secure by Design" alerts spotlighting the prevalence of widely documented vulnerabilities that remain in software despite available mitigations.

Some of these alerts have been released in response to ongoing malicious campaigns, such as a July 2024 alert asking tech companies to eliminate OS command injection weaknesses exploited by the Chinese Velvet Ant state hackers in attacks targeting Cisco, Palo Alto, and Ivanti network edge devices.

This week, the cybersecurity agency advised developers and product teams to review the 2025 CWE Top 25 to identify key weaknesses and adopt Secure by Design practices, while security teams were asked to integrate it into their app security testing and vulnerability management processes.

In April 2025, CISA also announced that the U.S. government had extended MITRE's funding for another 11 months to ensure continuity of the critical Common Vulnerabilities and Exposures (CVE) program, following a warning from MITRE VP Yosry Barsoum that government funding for the CVE and CWE programs was set to expire .

Training LLMs for Honesty via Confessions

Hacker News
arxiv.org
2025-12-12 10:37:51
Comments...
Original Article

Abstract: Large language models (LLMs) can be dishonest when reporting on their actions and beliefs -- for example, they may overstate their confidence in factual claims or cover up evidence of covert actions. Such dishonesty may arise due to the effects of reinforcement learning (RL), where challenges with reward shaping can result in a training process that inadvertently incentivizes the model to lie or misrepresent its actions.
In this work we propose a method for eliciting an honest expression of an LLM's shortcomings via a self-reported *confession*. A confession is an output, provided upon request after a model's original answer, that is meant to serve as a full account of the model's compliance with the letter and spirit of its policies and instructions. The reward assigned to a confession during training is solely based on its honesty, and does not impact positively or negatively the main answer's reward. As long as the "path of least resistance" for maximizing confession reward is to surface misbehavior rather than covering it up, this incentivizes models to be honest in their confessions. Our findings provide some justification for this empirical assumption, especially in the case of egregious model misbehavior.
To demonstrate the viability of our approach, we train GPT-5-Thinking to produce confessions, and we evaluate its honesty in out-of-distribution scenarios measuring hallucination, instruction following, scheming, and reward hacking. We find that when the model lies or omits shortcomings in its "main" answer, it often confesses to these behaviors honestly, and this confession honesty modestly improves with training. Confessions can enable a number of inference-time interventions including monitoring, rejection sampling, and surfacing issues to the user.
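
A minimal sketch of the reward decoupling the abstract describes, under the assumption of a separate honesty judge; the names here (task_grader, honesty_judge) are hypothetical and this is not the paper's actual training code:

    def reward_main_and_confession(main_answer, confession, task_grader, honesty_judge):
        # The main answer is scored on task success as usual.
        r_main = task_grader(main_answer)
        # The confession is scored *only* on how honestly it reports shortcomings
        # and policy compliance; per the abstract, it neither boosts nor penalizes
        # the main answer's reward.
        r_confession = honesty_judge(main_answer, confession)
        return r_main, r_confession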

Submission history

From: Boaz Barak [ view email ]
[v1] Mon, 8 Dec 2025 23:05:52 UTC (2,679 KB)

You are dating an ecosystem

Hacker News
www.razor.blog
2025-12-12 09:48:52
Comments...
Original Article

There was a time when a relationship meant two people in one household, trying to live with each other.

That era is gone.

You don’t date a woman anymore.

You date what her feed serves you.

Her group chat.

The Instagram explore page that shapes her taste.

The vocabulary borrowed from her favorite online therapist.

Micro-influencers she follows without thinking.

The TikTok algorithm that nudges her mood.

The attachment style she diagnosed herself with.

Opinions from friends, refreshed by the hour.

You are never with one person alone.

You are dating an ecosystem.

And the same is true for her.

She is not really with you.

What she’s with are your notifications, your online habits, your algorithmic byproducts.

The digital residue of your life weighs as much as the real man sitting across from her.

Two people in a relationship is now a historical relic.

A museum piece.

Something that belonged to the age before the endless feed.

Back then, a partner had maybe three confidants.

You knew all of them.

You knew their biases.

You knew the stories they told.

You knew what they thought about you.

Now?

You have no idea what gets whispered into her mind at two in the morning.

Not by a friend, but by something closer than a friend.

Some post on a theory about “boundaries” and “red flags.”

A content creator who declares your entire existence problematic because they need engagement metrics.

And it doesn’t stop there.

She has friends.

Each friend brings her own feed, her own micro-culture, her own AI therapist, her own algorithm humming quietly in the background.

The influence multiplies outward, exponentially, like that old legend of the rice grain on the chessboard.

Relationships used to be negotiations.

Now they are competing broadcasts.

And the worst part is this: these voices don’t just comment on the relationship.

They rewrite it in real time.

They take the ordinary friction of daily life and turn it into pathology.

What used to be a disagreement becomes “emotional labor.”

A bad mood gets labeled “toxic energy.”

Forgetting to text becomes “avoidant attachment.”

Ordinary human flaws get repackaged as clinical disorders.

The person you wake up next to is not the same person who scrolled through her feed while you slept.

And tomorrow she may be someone else again.

The same happens to you.

You might think you are steady, but your own feed is shaping you in ways you don’t notice.

You carry opinions you never thought through, anger out of nowhere, and suspicions that have no real source.

You bring them home like dust on your clothes.

Two people loving each other sounds simple.

But what happens when each of them carries an army of advisers?

That’s the real problem.

You will never be in a relationship with one person again.

You will always be in a relationship with her network, her internalized culture, her digital chorus of opinions.

And she will be in one with your own digital ghosts.

The old world had predictable norms.

You knew who influenced your partner.

The family.

A close friend.

Maybe a therapist.

You could more or less map the territory.

Today the territory is infinite.

If she has doubts, will she talk to you or her council?

And that circle now includes an algorithm designed to amplify doubt because doubt produces engagement.

Maybe she asks her AI.

And you ask yours.

It’s not betrayal.

It’s fragmentation.

You won’t be in a relationship anymore;

you will be in a negotiation between two information systems.

Every moment you spend with your partner has an audience you can’t see.

So when someone says, “Just find the right person,” you almost laugh.

The right person is no longer alone.

The right person is bombarded with advice.

The right person will change in ways you cannot anticipate.

Yes, you can love a person.

But you cannot love an algorithm.

Typeslayer - a TypeScript types performance tool

Lobsters
youtu.be
2025-12-12 08:54:36
Comments...

Show HN: Jottings; Anti-social microblog for your thoughts

Hacker News
jottings.me
2025-12-12 08:32:13
Comments...
Original Article

Anti-social media

No algorithm. No likes or replies to chase. No dopamine traps. Just your words, on your own domain, in chronological order. The way the internet was supposed to work.

Social media is broken.

You know it. I know it.

Twitter became a dumpster fire. Instagram became a shopping mall. TikTok became a slot machine. Even the "alternatives" are just the same game with different rules.

They want you to engage. To scroll. To stay. To create and become content for the algorithm.

Jottings is different.

No algorithm—ever. Your posts appear in chronological order because that's the only order that makes sense.

No likes, no followers, no replies to chase. Your worth isn't measured in hearts.

No ads, no tracking, no manipulation. Your attention isn't for sale.

Just a quiet place to jot things down.

For product updates. For travel notes. For thoughts on life. For whatever you want.

This is anti-social media.

Vishal

What you won't find here

Algorithm

Chronological, always

Likes & Replies

Your words speak for themselves

Follower counts

Just subscribers via RSS

Ads

Never. Your attention is yours.

Infinite scroll

A clean, finished page

Dopamine traps

Just your thoughts, in order

What you will find

Own your space

Unique sitename.jottings.me subdomain for free, or bring your own domain. Auto SSL included.

Fast & reliable

Fully static sites with automatic deployment.

SEO that works

Clean semantic HTML, accessible markup, full Open Graph meta tags, JSON-LD schemas, robots.txt, ai.txt, sitemap.xml, and site.webmanifest.

Multiple feed formats

RSS, JSON Feed, and Atom formats so readers can subscribe with any feed reader. Per-tag feeds included.

Full Markdown

Write naturally; render beautifully.

Organized by tags

Tags double as channels; each tag gets its own page with filtered jot collection. Multiple tags per jot.

AI writing assistant

Draft, expand, rephrase, fix grammar, adjust tone, summarize. Undo any AI change with one click. Contextually aware of your site and writing style.

AI tag suggestions

Get smart tag suggestions based on your content and existing tags for consistency.

Search built-in

Pre-indexed offline text search for instant recall. Cmd+K (or Ctrl+K) works site-wide.

Accessible by design

Light/dark mode, system theme, proper alt text.

Author profiles

Add your name, photo, website, and bio. Your identity, front and center on your site.

Info page included

Dedicated page for author bio, about sections, and any other info you want to share with your audience.

Minimal UI, max focus

Your words and visuals, nothing in the way.

Built-in analytics

Free: Total pageviews. PRO: Visitors, referrers, top pages, devices, and countries. Privacy-first, no third-party scripts.

Built for AI & agents

Agent-friendly feeds

JSON, RSS, and Atom for easy ingestion.

Stable URLs & clean markup

Predictable structure for AI search engines and assistants.

Rich context

Alt text, link metadata, tags, and per-tag feeds make your content machine-readable.

Native iOS App

Coming Soon

Jot from anywhere

Full-featured native iOS app with complete dashboard parity. Create text, link, and photo jots on the go. Quick builds. Dark mode and iOS 26 Liquid Glass design.

Sign up for free and be the first to know when it launches.

Who it's for

What users are saying

"The tag-based feeds are genius. My customers subscribe to #features, my investors follow #metrics, and I control it all from one place."

"I tried Twitter, then Threads, then Bluesky. Each one felt like shouting into a crowd. Jottings gave me a quiet place to document my pottery journey—just me, my photos, and whoever wants to follow along."

— Maya S., Hobbyist Potter

"I use it to share seasonal updates with our extended family across three countries. No one has to download another app or create an account—they just bookmark the page."

— Priya K., Family Chronicler

"Finally, a place for my writing that isn't trying to game my attention. Chronological, clean, mine. That's all I wanted."

— James W., Freelance Writer

Simple, transparent pricing

FREE

$0 /mo

  • Unlimited text jots - Write as much as you want
  • Plain text formatting - Keep it simple
  • Tag organization - Categorize your posts
  • Basic SEO - robots.txt, basic sitemap
  • Pageview counter - See total pageviews for your site
  • Limited AI - 3 AI uses per day for writing & tag suggestions
  • 3 sites maximum - Perfect for trying out Jottings
  • *.jottings.me subdomain - Your free hosted site
  • Community support - Help via email and docs

Sign Up Free

RECOMMENDED

PRO

Starting at

$5 /mo

  • Full Markdown - Headers, lists, emphasis, code blocks, blockquotes
  • Photo attachments - Auto-resize to 1200px, optimize quality, animated GIF support
  • Link previews - Auto-fetch Open Graph metadata, titles, images
  • AI writing assistant - Draft, expand, rephrase, fix grammar, adjust tone, summarize
  • AI tag suggestions - Smart tags based on content and your existing tags
  • Custom domain - your-blog.com with free SSL certificate
  • Full analytics dashboard - Visitors, referrers, top pages, devices, countries, cities, and real-time live view
  • RSS & JSON feeds - Let readers subscribe with Feedbin, NetNewsWire, Reeder. See guide
  • Rich SEO files - ai.txt, humans.txt, site.webmanifest, enhanced sitemap
  • Social preview images - Upload site banner for Facebook, Twitter, LinkedIn, Discord sharing
  • AI search optimized - Indexed by Claude, GPT, Perplexity
  • Remove branding - Option to hide "Powered by Jottings"
  • Priority support - Faster response times
  • 5 sites with Pro features - All your sites get Pro upgrades

All plans include SSL, automatic deployments, and dark/light theme support.

How it works (2 min)

2. Create a site: with free *.jottings.me subdomain
3. Start jotting: text, images, links, Markdown, tags
4. Go custom: connect your own domain (auto SSL)
5. Done: automatic deployment, feeds, search, tag pages

How does Jottings compare?

What makes it different

No setup spiral

You're live in minutes.

No database drama

Static sites by default.

No fluff

A focused tool for publishing and finding your work, human- and machine-readable.

FAQ (short and honest)

Yes. Connect it in minutes; SSL is automatic.

Yes. Static files served from a globally distributed edge CDN—fast by default and friendly to search engines.

Yes. RSS, Atom, and JSON for your site and per-tag channels.

Yes. AI writing assistant can draft, expand, rephrase, fix grammar, adjust tone, and summarize. Undo any change with one click. It learns your writing style and suggests tags based on your content.

Yes. Clean markup, stable URLs, and JSON feeds make ingestion easy.

Yes. Use tags; each tag has its own page and feed.

Yes—light, dark, or follow system.

Yes. Free users see total pageviews; PRO unlocks visitors, referrers, devices, and countries.

Ready to escape the feed?

Claim your corner of the internet. Start jotting today.

Journalism students expose Russian-linked vessels off the Dutch and German coast

Hacker News
www.digitaldigging.org
2025-12-12 08:24:22
Comments...
Original Article

Seven German journalism students tracked Russian-crewed freighters lurking off the Dutch and German coast—and connected them to drone swarms over military bases. Let me walk you through what Michèle Borcherding, Clara Veihelmann, Luca-Marie Hoffmann, Julius Nieweler, Tobias Wellnitz, Sergen Kaya, and Clemens Justus pulled off.

Just so you know, I’m familiar with them. I did a long OSINT training with them in Berlin. I can tell you: they went far beyond anything I taught them. The physical verification alone—chasing a ship across France, the Netherlands, and Belgium—that’s not something you learn in a classroom.

On the night of May 16, 2025, two ships were sitting in suspicious positions. The HAV Dolphin —flagged in Antigua & Barbuda—had been circling in Germany’s Kiel Bay for ten days. Not delivering cargo. Just loitering, 25 kilometers from defense shipyards where drone swarms had been spotted on three separate days.

Merry goes round and round and round. Based on shipping data.

Meanwhile, 115 kilometers away off the Dutch island of Schiermonnikoog, her sister ship the HAV Snapper had sailed out and parked in open water. It positioned itself exactly two hours before seven drones appeared over a Russian freighter being escorted by German police through the North Sea. It stayed there for four days and made silly circles.

That Russian freighter, the Lauga —formerly named “Ivan Shchepetov”—had visited Syria’s Tartus port the previous summer. Russia’s only Mediterranean naval base. Where Russian submarines dock.

Coincidence? That’s what the students from the Axel Springer Academy decided to find out. What followed was a five-week investigation involving leaked classified documents, tens of thousands of ship tracking data points, a 2,500-kilometer car chase across three countries, and—in a delicious bit of turnabout—their own drone flight over one of the suspect vessels.

“Wir haben zurück-gedrohnt,” they told their audience today during a presentation of the project. We droned back.

The scale of the problem

Let’s start with the numbers that German authorities didn’t want public.

According to classified BKA reports obtained by the team: 1,072 incidents involving 1,955 drones in 2025 alone (as of November 19). Forty-five percent occurred in evening hours. Drone swarms flew “almost exclusively over or near military installations.”

In only 29 of 498 investigated cases could drone pilots be identified. In none of those cases were they state actors. In 88 percent of cases , authorities couldn’t even identify the drone type.

The BKA’s own assessment: “Individual incidents indicate complex operations drawing on larger financial and logistical resources .”

Translation: this isn’t hobbyists.

October 2, 2025. First drone sightings around Munich Airport at 22:10. By 22:35, the airport shut down completely. No departures, no landings. Nearly 3,000 passengers stranded overnight.

October 3: same thing. Another shutdown. This time 6,500 people affected . Four hundred emergency cots deployed. Over two days: roughly 10,000 travelers disrupted.

Economic damage to Munich Airport: 6 to 8 million euros .

By October 2025, German air traffic control had logged 192 drone incidents —a new record. Frankfurt led with 43.

The Munich prosecutor’s office is investigating “dangerous interference with air traffic.” Against unknown perpetrators.

The leaked BKA documents reveal a disturbing pattern: drones aren’t just buzzing airports. They’re systematically surveilling military installations—often during sensitive operations.

April 15, 2025: A drone overflies Werratalkaserne in Bad Salzungen at 9 PM. At that exact moment: combat vehicles were being delivered and stored for transport to Ukraine .

May 17-21, 2025: During the Bundeswehr exercise “Gelber Merkur” across four German states, more than 80 drone sightings over military sites and defense contractors. Almost exclusively in evening hours.

January 28-29, 2025: Multiple drones spotted over Schwesing airfield. At the time, Ukrainian soldiers were being trained there .

Q1 2025: Persistent overflights of US Air Base Ramstein .

June 4, 2025: Photo reconnaissance of Wilhelmshaven Naval Arsenal.

January 8, 2025: Drones flying in formation over a naval aviation squadron base in Nordholz.

It’s not just military bases.

February-March 2025: Three evening overflights of the LNG terminal in Stade. Two incidents at the adjacent seaport. Seven incidents over Wilhelmshaven Naval Arsenal. One pilot was identified: a 69-year-old German. No evidence of state involvement in that case.

May 4, 2025: A drone flew over Biblis nuclear power plant for five to ten minutes . An unknown vehicle drove up to the open facility gate. Search unsuccessful.

January 13, 2025: Multiple drones spotted over a chemical company in Marl starting at 8 PM. No drone models identified. No suspicious persons or vehicles found. Authorities concluded: “ Coincidence is excluded .”

They started with official channels. What they found was instructive.

The team obtained an internal email showing that all state interior ministries were instructed to give uniform responses to questions about drone sightings. Coordinated stonewalling.

“We were trapped in a bureaucratic maze,” Michèle Borcherding told the audience. “The federal states closed ranks. Nobody felt responsible. We got passed from one spokesperson to the next, and it felt like the left hand didn’t know what the right hand was doing.”

But here’s the thing that made them dig harder: “The answers we got were nearly identical. That everyone blocked us like that triggered the thought: something’s there .”

So they kept digging. And obtained the classified BKA reports that authorities didn’t want public.

Clara Veihelmann led the ship tracking analysis. They used Global Fishing Watch (globalfishingwatch.org) to pull AIS data—the automatic identification system that broadcasts ship positions.

“We analyzed tens of thousands of data points,” Veihelmann explains. “We looked closely at ship data in the North Sea and Baltic Sea.”

They were looking for anomalies. Cargo ships move from A to B. Efficiency matters. So when the team saw a ship track that looked like someone had scribbled furiously with a purple marker—loops, circles, ten days of chaos in Kiel Bay instead of a clean transit line—the audience laughed at how absurd it looked.

Experts confirmed: absolutely not normal for a coastal freighter that’s supposed to be making money delivering cargo.

Specifications: 88.32m long, 12.50m beam. Built 1993. Flag: Antigua & Barbuda. Owner: HAV Shipping AS (Norway).

The HAV Dolphin is “the most striking example of all,” the team concluded.

From late March to late April 2025, the ship spent nearly a month at the Pregol Shipyard in Kaliningrad —a facility with documented ties to the Russian military and Rosatom.

Then it sailed to Kiel Bay. May 1-10: the ship sat there for nine days, exhibiting chaotic movement patterns. During exactly this period, drones were spotted on three separate days over defense shipyards 25 kilometers away. Observers noted the drones appeared to come from the direction of the Baltic Sea .

The BKA’s own classified report notes: “A container ship under the flag of Antigua & Barbuda had been anchoring in Kiel Bay since May 1, 2025, with a recent extended stay in Russia .”

The HAV Dolphin has been inspected three times by German and Dutch authorities. Each time: nothing found.

But here’s what the team learned from security sources: those inspections were “ superficial “ and “ symbolic in nature .” Not all containers were opened. You can’t properly search a freighter like that without many investigators and more time, they were told.

Classified security documents reveal another detail: “During personnel inspection, an additional watch officer was found on board who behaved conspicuously during the inspection.” The captain claimed he was there for training purposes.

The crew? Entirely Russian .

And another incident: In early June, a drone was spotted inside the military security zone at a naval radio station between Ramsloh and Rhauderfehn—used for communication with German submarines and NATO ships. It flew over for about two minutes before disappearing. Seventy kilometers away, the HAV Dolphin had been anchored in the Ems estuary for three days.

Specifications: 102.50m long, 16.40m beam. Built 1994. Former name: “Ivan Shchepetov.” Flag: Russia. Owner: Idan Shipping Company, St. Petersburg.

Night of May 16-17, 2025. The Lauga transits through the North Sea, escorted by the German Federal Police vessel Potsdam.

At approximately 1 AM, the police vessel reports: “ Seven drones detected around the deployment ship .” The drones circle both the Russian freighter and the German police ship for hours. Three eventually depart. Four keep circling. Eventually, the police vessel breaks off its escort.

Belgian customs later searched the Lauga in Zeebrugge. All eleven crew members— Russian citizens to a man —were questioned. No drones found.

But the Lauga’s history is telling. Summer 2024: the ship called at Tartus, Syria —Russia’s only naval base in the Mediterranean, where Russian submarines dock.

After the Belgian search, the Lauga sailed to St. Petersburg and docked at the Petrolesport terminal—owned by Delo Group, which is 49% owned by Rosatom , Russia’s state nuclear corporation.

The ship’s owner, Idan Shipping Company, is run by Andrei Selyanin. Documents show Selyanin’s other companies openly advertised working for Rosatom in 2024. The team found evidence of earlier Rosatom connections in additional documents.

Specifications: 88.16m long, 12.50m beam. Built 1991. Flag: Bahamas. Owner: HAV Shipping AS (Norway)—same company as the HAV Dolphin.

On the evening of May 16, the HAV Snapper sailed out to a position off the Dutch island of Schiermonnikoog. Two hours before the first drones were spotted over the Lauga and Potsdam , she took up station. She stayed for four days.

Distance from the Lauga incident: 115 kilometers . Within drone range.

The HAV Snapper was serviced at the Pregol Shipyard in Kaliningrad from August 4-29, 2023—the same facility where the HAV Dolphin spent nearly a month before the Kiel incidents.

HAV Shipping told the journalists that Pregol is “a reputable shipyard used by many European shipping companies.” Perhaps. But Pregol has documented connections to both the Russian military and Rosatom.

The team traced ownership chains, maintenance records, corporate connections. Multiple threads led to Rosatom —Russia’s state nuclear corporation, responsible for nuclear weapons and the submarine program. Rosatom also operates the Atomflot fleet of nuclear-powered icebreakers.

Then they found a 2024 Rosatom presentation. It showed an orange-and-blue drone on the helipad of a massive red icebreaker in an Arctic landscape.

Rosatom’s drone specifications:

• Speed: 140 km/h

• Range: at least 200 kilometers

• Maximum altitude: 2.5 kilometers

• Equipment: video cameras, thermal imaging

• Can launch and land on ships

Officially, Rosatom uses these drones for Arctic sea route surveillance. Conditions in the North Sea and Baltic? Far more favorable than the Arctic.

The HAV Dolphin was 25 kilometers from the Kiel drone sightings. The HAV Snapper was 115 kilometers from the Lauga incident. The naval radio station incident: 70 kilometers from the HAV Dolphin.

All well within Rosatom drone range.

“At this point it was clear to us,” Borcherding says. “We don’t just want to see these ships as data on our screen. We have to get one of them in front of our own eyes.”

They tracked the HAV Dolphin to a French Atlantic port. Called the harbor authority. Got confirmation: the ship would stay until 7 PM the next day.

They flew to Paris. Rented a car. Drove five hours to the coast.

The ship was gone.

What followed was a 2,500-kilometer pursuit from France through the Netherlands to Belgium. The HAV Dolphin was “unpredictable.” Left ports early. Sometimes crawled. Then suddenly sped up. Changed destination data. Then deleted its destination entirely.

It never reached Antwerp. Instead, it parked on a sandbank off the Belgian coast near Ostende. Then it started circling—25 kilometers offshore, directly in front of a Belgian military base .

The team finally caught up.

They flew their own drone over the HAV Dolphin.

From the air: 88 meters long, 12 meters wide. Open cargo hold with grid covers, typical for multi-purpose freighters. German security sources confirmed what they’d suspected: the crew was exclusively Russian sailors .

The Federal Office for the Protection of the Constitution (Germany’s domestic intelligence agency) told the team:

“Russian intelligence services deploy so-called low-level agents for espionage, sabotage, and other disruptive measures. These ‘pocket money agents’ or ‘disposable agents’ operate for small sums in the interest of hostile intelligence services without belonging to them. They’re used for comparatively simple operations. Unlike regular staff, they’re expendable—exposure is accepted as a cost of doing business.”

The profile: young men with unstable social circumstances and financial difficulties. Recruited online via messenger services and social media. Financially motivated, sometimes ideologically aligned with Russia.

The Munich prosecutor found no evidence linking airport drone incidents to disposable agents. But the intelligence service confirms the tactic exists and is actively used.

European intelligence services assess the three documented ships as operating “ with high confidence “ on behalf of Russian interests. Their movement profiles are “very conspicuous” and show “ little evidence of commercial activity .”

The German Interior Ministry’s official response: “German security authorities are aware of the ‘shadow fleet’ phenomenon. The issue is being continuously addressed within respective legal jurisdictions. (...) Reports of drone overflights remain consistently high. (...) Involvement of foreign state entities in a non-quantifiable portion of drone overflights is to be presumed .”

“Non-quantifiable portion.” “To be presumed.” That’s intelligence-speak for: we know it’s happening, we can’t prove exactly how much, but we’re not going to say that publicly.

The team’s final tally: they could draw 19 temporal and geographic correlations between drone sightings and the positions of the three ships.

When drones appeared over northern Germany, a pattern emerged: often, multiple ships exhibited suspicious behavior simultaneously. Movement data showed suspicious patterns over multiple days.

HAV Shipping CEO Petter Kleppan responded to the team’s findings: “HAV has exclusively large, established European companies as customers. We transport dry goods from port to port—steel, curbs, grain, scrap. Typical invoice: about €50,000. We have no Russian customers and generate no revenue from Russia. HAV has ‘self-sanctioned’—we don’t transport goods to or from Russian customers, and we don’t work with Russian brokers.”

“Our trail leads to Russia,” the team concludes. “Not beyond doubt, but it’s currently the most probable explanation. We systematically laid both things side by side: the secret reports about drone incidents and the routes of the ships. You can at least recognize a pattern.”

They didn’t find a drone on any ship. They can’t prove causation. What they established:

• Ships with Russian crews exhibited anomalous behavior near German military installations

• Multiple ships from the same owner positioned suspiciously during the same incidents

• These ships have connections to Russian military-linked facilities (Pregol, Rosatom, Tartus)

• 19 correlations between ship positions and documented drone incidents

• Russia’s state nuclear corporation operates ship-launched drones with sufficient range

• Official inspections were “symbolic”—not all containers opened

• European intelligence assesses these ships as “with high confidence” working for Russia

When the presentation ended, the boss of the paper in the front row spoke up. “I’ve rarely seen anything this good,” she said. “I hope you’re sufficiently proud of yourselves. This is really outstanding. I have goosebumps.”

She was right. Seven students, five weeks, publicly available ship tracking tools, and a willingness to drive 2,500 kilometers on a hunch. They produced a more coherent picture of Germany’s and the Netherlands’ drone mystery than months of official hand-wringing and coordinated stonewalling.

The ships are still sailing. And you can watch them. The AIS data is still updating. Go find them.

As I write this (December 11, 2025), the HAV Snapper is in the Aegean Sea, underway from Volos to Thessaloniki, Greece. Speed: 8 knots. Course: 327°. Draught: 4.3 meters. Expected arrival: 07:30 local time.

I pulled this from MarineTraffic three minutes ago. You can do the same.

HAV Snapper identifiers for tracking:

• IMO: 9001813

• MMSI: 311014800

• Call sign: C6XN4

• Flag: Bahamas

• AIS transponder: Class A

Free tracking tools:

MarineTraffic.com — Search by vessel name or IMO number. Free tier shows current position; paid tiers show historical tracks.

VesselFinder.com — Similar functionality, good for cross-referencing.

Global Fishing Watch (globalfishingwatch.org) — What the Axel Springer team used. Better for historical analysis and bulk data.

Kpler — Professional-grade commodity and shipping intelligence. Subscription required but powerful.

The IMO number is key—it’s the ship’s unique identifier that doesn’t change even if the vessel changes names, flags, or owners. The Lauga used to be the “Ivan Shchepetov”. Same IMO number.

Set up alerts. Watch for anomalous behavior. When a cargo ship that should be moving from A to B starts circling for days in the Baltic, you’ll know.
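
As a rough illustration of what “anomalous behavior” can mean in AIS data, here is a small Python sketch (our own, not the students’ methodology) that flags a vessel as loitering when its reported positions stay within a small radius for days instead of tracing a transit line:

    import math
    from datetime import timedelta

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two lat/lon points, in kilometers.
        r = 6371.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def is_loitering(track, radius_km=15.0, min_hours=48):
        # track: time-sorted list of (timestamp, lat, lon) AIS points,
        # where timestamp is a datetime. Flags a ship that sat inside a
        # small circle for days rather than making way between ports.
        if len(track) < 2:
            return False
        t0, lat0, lon0 = track[0]
        duration = track[-1][0] - t0
        max_drift = max(haversine_km(lat0, lon0, lat, lon) for _, lat, lon in track)
        return duration >= timedelta(hours=min_hours) and max_drift <= radius_km

Feeding exported tracks from the tools above through a check like this is one way to surface candidates worth a closer look; thresholds such as 15 km and 48 hours are illustrative, not the team’s.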

That’s the point of OSINT: find the story in public data.

Smartphone Without a Battery (2022)

Hacker News
yaky.dev
2025-12-12 07:36:17
Comments...
Original Article

How to wire and run an old smartphone without a battery.

Intro

I have an old Samsung Galaxy S5 that I wanted to use to run my 3D printer. There is a great project called octo4a that runs OctoPrint on Android devices.

octo4a

Using an old smartphone for OctoPrint is a perfect fit - it has USB OTG support to connect to the printer, WiFi to access the controls and upload models, and a camera to monitor the print progress. The only, yet critical, issue is that the kernel for Galaxy S5 does not support charging while it's connected to a USB device. The battery is old and worn, and cannot last through several hours of printing.

The approach I took was to build a "fake battery" circuit that emulates the battery, but is powered by 5V via USB.

Process

First, I removed the battery. Luckily, this smartphone is old enough to have user-serviceable battery compartment.

Older batteries might have only two terminals, (+) and (-). Newer batteries might have three or four terminals. I measured voltage and resistance between all terminals to find out what they might be.

battery-pins.jpg

These are the results:

  • Terminals (+) and (-) are obvious. Generally, Li-ion battery will produce 3.4V when almost-empty and 4.2V when full. The battery also says "CHARGE VOLTAGE 4.4V", so that voltage level at the battery terminal would not cause any issues with the smartphone.
  • The second terminal is the thermistor (T), used to get the approximate temperature of the battery. The resistance between (T) and (-) is around 2350ohm at room temperature. Being a safety feature, my smartphone will not start at all if the thermistor terminal is not connected.
  • The fourth terminal is most likely used for NFC (which is a part of the battery). It's not going to be used.

So what I need my "fake battery" circuit to do is:

  • Provide 3.4-4.4V between (+) and (-) battery terminals on the smartphone
  • Create ~2350ohm resistance between (T) and (-)

Since I am still powering the smartphone from a 5V USB power supply, I can add a single silicon diode between the power supply's +5V and (+) battery terminal to drop the incoming voltage by ~0.7V down to ~4.3V, which is within the range that the smartphone can expect.

IMPORTANT: Voltage drop across the silicon diode depends on the current, and multimeter measurements might not be accurate. See "Diode voltage drop" at the bottom of the page.

To get ~2350ohms, I connected a 2200ohm and a 150ohm in series. A variable resistor of appropriate range might work too.
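
A quick numeric sanity check of those two design values (the 0.7 V figure is the nominal silicon-diode drop mentioned above; the real drop varies with current, see "Diode voltage drop" below):

    SUPPLY_V = 5.0
    DIODE_DROP_V = 0.7            # nominal; actual drop depends on load current
    THERMISTOR_TARGET_OHM = 2350  # measured between (T) and (-) at room temperature

    fake_battery_v = SUPPLY_V - DIODE_DROP_V   # ~4.3 V, inside the 3.4-4.4 V window
    series_resistance = 2200 + 150             # two resistors in series = 2350 ohm

    print(f"Battery terminal voltage: {fake_battery_v:.1f} V")
    print(f"Thermistor emulation: {series_resistance} ohm (target {THERMISTOR_TARGET_OHM})")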

This setup works, and the smartphone powers up. However, it can draw plenty of current and needs a fairly powerful power supply. A basic <1A phone-charger USB power supply was not even enough to finish booting, but a ~2A supply was enough to boot and launch octo4a. Even after booting, if the smartphone is under higher load (initializing the USB connection, using the camera), it draws more current, and will abruptly power off if the current draw exceeds what the power supply can provide.

To help address these issues:

  • Add a 1000uF capacitor between (+5V) and (ground) on the circuit. See "Capacitors" at the bottom of the page for warnings and ideas.
  • Turn on the Battery Saver in Android and set it up to automatically turn on if estimated battery level is below 75% (just in case). This seems to lower the processing power enough to avoid shutdowns.
  • In OctoPrint settings, either disable the camera entirely, or use it at a lower FPS and resolution (320x240 @5FPS). The camera has occasionally caused octo4a to crash, so I ended up disabling it anyway.

Schematic

In ASCII:

(+5V)
  |
  |          silicon diode
  +---------------|>|------(+)
  |
  |        +---[2350ohm]---(T)
  | 1000uF |
  +---||---+---------------(-)
           |
        (ground)

As an image:

fake-battery-schematic.jpg

Result

I cut and sanded a piece of flooring to fit into the battery compartment. Then, built and soldered the circuit on a through-hole prototype board and mounted it on top of that with double-sided tape. The 4 pins on the top are bent inward, fit into small grooves in the base and make good contact with the springy terminals of the battery compartment. (I could have just soldered the wires directly to the terminals, but wanted to make this device removable just in case I would ever need the battery again). Power comes through the JST connector at the bottom. (I salvaged a long USB cable with a broken micro-USB plug, so I decided to use JST connectors instead of USB for power).

wooden-battery.jpg

3D printer (old RepRapPro) in its enclosure (storage box) and its brain (Samsung Galaxy S5) in a 3D-printed holder. Super modern and high-tech, I know.

3d-printer-setup.jpg

This setup has been working fairly well for me for a few weeks now.

Additional info

Diode voltage drop

Voltage drop across the diode depends on the current. If you connect the diode, but do not connect the circuit to anything, and try to measure the "final" voltage (after the diode) with a multimeter, you will get somewhere around ~4.8V, because the current running through the multimeter is minuscule. You might think you need 4 diodes to bring the voltage down sufficiently, but you do not; one is enough.

Diagrams:

Circuit without load, measured with a multimeter:

(+5V)
  |  diode
  +---|>|---+ <--+
                 |
          [multimeter: 4.8V]
                 |
  +---------+ <--+
  |
(ground)

Circuit with load (connected to the smartphone):

(+5V)
  |  diode
  +---|>|---+ <-------------+
            |               |
       [smartphone]  [multimeter: 4.2V]
            |               |
  +---------+ <-------------+
  |
(ground)

Capacitors

It might be possible to use a lower-current power supply by replacing the 1000uF capacitor with several "supercapacitors" (5F and above). The only issue I see is that the initial power-up will stress the power supply since empty capacitors are essentially a short circuit.

References

Adafruit - about Li-ion and LiPoly batteries


The tiniest yet real telescope I've built

Hacker News
lucassifoni.info
2025-12-12 07:35:49
Comments...
Original Article

The tiniest yet real telescope I've built


A “relaxation” project, mostly drawn on planes to and from Norway this month, where I had to travel to set up a digital art installation in Kristiansand with friends from the digital art collective Lab212. It has been drawn with one major constraint: it must fit in the inner pocket of my jacket (well, one specific jacket), except for the rods.

This is a 3D-printed dobsonian telescope built around a 76mm/300mm parabolic mirror kit. While there are plenty of mini-scope models on the internet, I wanted something that looked like a dobson that went a bit too hard through the clothes dryer, but without compromise on what matters:

  • Balance
  • Smooth movements
  • Rigidity
  • Collimatable
  • Focusable eyepiece holder
  • A minimum of style (entirely subjective)

Hardware

  • PETG-CF filament
  • 4mm carbon rods
  • M3 screws and M3x4.5x4.5 heat-set inserts
  • A spring
  • Nylon screws to collimate both the primary and secondary mirrors
  • 4 magnets for the secondary
  • A bit of paraffin to lubricate the focuser
  • A lycra light shroud that also helps with delaying dew forming on the mirrors

The focuser follows Analog Sky’s recipe: the tube that receives the eyepiece is also the movement itself, with a rounded thread that prints extremely smoothly with very little play. No additional hardware needed - the eyepiece is self-held by the flexion of plastic fins.

All the holes for the rods are straight, which forces them to arch, which “locks” the structure in place.

The alt/az movements use “teflon pads” (actually gray HDPE or UHMW for furniture feet) with rubber backing, scalped and glued.

Download the 3D files on Printables. Discussion on Astrosurf.

If you build it, the real trick for ease of mounting is to chamfer the carbon rods with a 1mm chamfer at both ends and seal it with CA glue. See the chamfer pic in the gallery.

Optical tests

Sadly, the results aren’t great. We were used to very good λ/6 or better mirrors from recent Aliexpress buys, but this one is very overcorrected. It was very smooth, with a rather good edge at the Foucault test, but it is overcorrected by 70%. With the eyepiece I selected, putting it at 30x power, this does not show too much, and it retains its “real telescope” status. But this mirror is so small that I will not refigure it – the re-aluminizing costs would outweigh the entire project.

Edit as of dec. 11th : of course I did not resist re-figuring it. It now hovers around 0.9 Strehl. The star test with the selected eyepiece shows nice symmetric defocused stars, and I can now count individual spider web strands and distinguish the dew droplets they carry on a nearby electrical pole, whereas I did not even see the spider web with the mirror as it came from the factory. I still need to do a proper “showable” Bath report with enough interferograms; my last test was 4 interferograms and carries a ton of noise. So it is great, but I now have to get it coated, and working a mirror this small did raise a few challenges in handling it.

All test pictures below are before refiguring

MKVCinemas streaming piracy service with 142M visits shuts down

Bleeping Computer
www.bleepingcomputer.com
2025-12-12 07:14:31
An anti-piracy coalition has dismantled one of India's most popular streaming piracy services, which has provided free access to movies and TV shows to millions over the past two years. [...]...
Original Article

Pirate

An anti-piracy coalition has dismantled one of India's most popular streaming piracy services, which has provided free access to movies and TV shows to millions over the past two years.

Backed by over 50 major television networks and film studios, including Disney, Warner Bros, Netflix, Paramount, Sony Pictures, and Universal Pictures, the Alliance for Creativity and Entertainment (ACE) focuses on shutting down illegal streaming services through criminal referrals, civil litigation, and cease-and-desist operations.

ACE's latest action resulted in the shutdown of the MKVCinemas piracy network and 25 related domains, which attracted over 142.4 million visitors between 2024 and 2025.

ACE identified the operator of the piracy platform in Bihar, India, who agreed to cease operations and transfer control of all associated domains. All MKVCinemas sites now redirect visitors to ACE's "Watch Legally" portal.

As part of the same action, it also shut down a widely used file-cloning tool that allowed users across India and Indonesia to distribute copyrighted content by copying files directly from hidden cloud repositories into their personal cloud storage.

This tool drew 231.4 million visits over the past two years and helped evade takedown efforts by concealing the source of media files uploaded to the cloud drives.

"Our actions make clear that ACE will relentlessly pursue and dismantle illegal operations so audiences and creators can benefit from a secure, sustainable marketplace," said Larissa Knapp, Executive Vice President at the Motion Picture Association (MPA).

ACE redirect banner
ACE redirect banner (BleepingComputer)

​Last month, ACE and DAZN also took down Photocall , a major TV digital piracy service that provided unauthorized access to 1,127 TV channels to over 26 million users annually, including live sports content.

A separate joint law enforcement operation coordinated by Europol disrupted more piracy streaming services in November after identifying 69 sites with over 11.8 million annual visitors.

The authorities also initiated 44 new investigations after connecting $55 million (over €47 million) in cryptocurrency to illegal streaming services and referring 25 illegal IPTV services to cryptocurrency providers for disruption.

In recent years, ACE has targeted a string of large-scale illegal streaming networks in joint operations with law enforcement organizations, including Europol, Interpol, and the U.S. Department of Justice.

Since the start of 2025, its efforts have also led to the shutdown of Streameast , one of the world's largest illegal live sports streaming networks, and Rare Breed TV , an IPTV piracy platform with over 28,000 channels and more than 100,000 movies and series.


Freeing a Xiaomi Humidifier from the Cloud

Lobsters
0l.de
2025-12-12 06:15:48
Comments...
Original Article
Home Assistant Logo

I recently moved into a new apartment which I used as an opportunity to make our home a little smarter. As a big open source supporter I built my smart home platform with Home Assistant of course.

Unfortunately, there are still far too few products that are directly compatible with Home Assistant. This is especially true for humidifiers, where I only found products that rely on a proprietary app or cloud from the manufacturer, something I would like to avoid at all costs. For one thing, such dependence is a certain form of planned obsolescence , as the product becomes useless as soon as the app loses its compatibility with new smartphone operating system versions or the manufacturer’s cloud is no longer operated.

Therefore, it was important for me to find a smart humidifier that integrates directly with my Home Assistant setup. To achieve this goal, I identified two options:

  1. Add sensors / actuators to a classic humidifier to make it smart.
  2. Replace the firmware of a smart humidifier with my own source code.

I decided to use the second approach because it required less effort, since I would have had to implement my own firmware in either case.

ESPHome Logo

Next, I was faced with the task of finding a suitable humidifier whose firmware I could easily replace. I specifically looked for devices that contained an ESP8266 or ESP32 microcontroller from Espressif , because for these I could easily create a new firmware with ESPHome .

ESPHome is a system that allows you to control your ESP8266/ESP32 through simple but powerful configuration files and remotely control it through home automation systems.

Xiaomi Mi Smart Antibacterial Humidifier
Xiaomi Mi Smart Antibacterial Humidifier.

Thanks to Sören Beye /Hypfer , I quickly became aware of the Xiaomi Mi Smart Antibacterial Humidifier , as Sören himself wrote his own firmware for this humidifier.

Unfortunately, his original version of the customized firmware ( /Hypfer/esp8266-deerma-humidifier ) is no longer compatible with the current version of the product, as Xiaomi has modified the internal communication protocol.

Therefore, I decided to re-implement the firmware based on ESPHome as an “external component”.

You can find the code for this component here: /stv0g/esphome-config/components/xiaomi_deerma_humidifier

This section is going to walk you through the process of modifying your own Xiaomi Humidifier.

  1. Find the correct model

    The internal Mi Model ID of the supported device is deerma.humidifier.jsq . The model on the packaging is ZNJSQ01DEM .

  2. Disassembly

    There are 4 Phillips-head screws hidden under the rubber foot ring, which can easily be peeled off. It is usually fine to remove the rubber only in the areas where the screws are located; that way you can easily reattach it later.

    Inside, you will find a small Wifi module attached to the back of the housing. Remove the module and solder on the wires as shown in the picture below.

  3. Wire UART

    Wifi board featuring an ESP-WROOM-02 module
    Wifi board featuring an ESP-WROOM-02 module.

    In the last picture, the colors of the wires correspond to the following:

    • Orange: GND
    • Grey: VCC (3.3 Volt!)
    • Yellow: GPIO0
    • Brown: RX
    • Red: TX

    In order to flash the module, you will need to tie GPIO0 to GND and attach a 3.3V serial adapter to the RX and TX pins. I also recommend disconnecting the module from the humidifier and powering it from your serial-to-USB adapter via the VCC pin.

    To flash the ESP8266 chip you can either use ESPHome’s built-in web flasher or esptool.py .

  4. Create backup of original firmware

    But before we flash the new firmware, I recommend first making a backup of the original Xiaomi firmware by running:

    esptool.py \
        --chip esp8266 \
        --baud 230400 \
        --port /dev/tty.usbserial-31310 \
        read_flash 0x0 0x200000 xiaomi-deerma-humidfier-original-2mb.bin

  5. Flash new firmware

    Afterwards, we can flash the new firmware:

    esptool.py \
        --chip esp8266 \
        --baud 230400 \
        --port /dev/tty.usbserial-31310 \
        write_flash 0x0 deerma.bin
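    The deerma.bin used here is the ESPHome firmware image built from a configuration that uses the external component linked above. As a rough sketch of that build step with the ESPHome CLI (the config name humidifier.yaml and the exact output path are placeholders; adjust them to your setup):

    pip install esphome
    esphome compile humidifier.yaml

    Or, to compile and flash over the serial adapter in one step:

    esphome run humidifier.yaml --device /dev/tty.usbserial-31310

    esphome compile places the resulting binary in its build directory (typically under .esphome/build/ ), from where it can be written with esptool.py as shown above.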

Guarding My Git Forge Against AI Scrapers

Lobsters
vulpinecitrus.info
2025-12-12 05:56:21
Comments...
Original Article

In August 2024, one of my roommates and partners messaged the apartment group chat, saying she noticed the internet was slow again at our place, and my forgejo was unable to render any page in under 15 seconds.

i investigated, thinking it would be a trivial little problem to solve. Soon enough, however, i would uncover hundreds of thousands of queries a day from thousands of individual IPs, fetching seemingly-random pages in my forge every single day, all the time.

This post summarizes the practical issues that arose as a result of the onslaught of scrapers eager to download millions of commits off of my forge, and the measures i put in place to limit the damage.

# Why the forge?

In the year 2025, on the web, everything is worth being scraped. Everything that came out of the mind of a human is susceptible to be snatched under the vastest labor theft scheme in the history of mankind. This very article, the second it gets published in any indexable page, will be added to countless datasets meant to train foundational large-language models. My words, your words, have contributed infinitesimal shifts of neural-network weights underpinning the largest, most grotesque accumulation of wealth seen over the lifetime of my parents, grandparents, and their grandparents themselves.

Oh, and forges have a lot of commits. See, if you have a public repository that is publicly exposed, every file in every folder for every commit will be connected. Add other options, such as a git blame on a file, and multiply it by the number of files and commits. Add the raw download link, also multiplied by the number of commits.

Say, hypothetically, you have a linux repository available, containing only the commits in the master branch up to the v6.17 tag from 2025-09-18. That's 1,383,738 commits in the range 1da177e4c3f4..e5f0a698b34e . How many files is that? Well:

count=0;
while read -r rev; do
    point=$(git ls-tree -tr $rev | wc -l);
    count=$(( $count + $point ));
    printf "[%s] %s: %d (tot: %d)\n" $(git log -1 --pretty=tformat:%cs $rev) $rev $point $count;
done < <(git rev-list "1da177e4c3f4..e5f0a698b34e");
printf "Total: $count\n";

i ran this on the 100 commits before v6.17 . With git ls-tree -tr $rev , both files and directories are counted; replacing it with git ls-tree -r $rev counts only files. i got 72024729 files, and 76798658 files and directories. Running on the whole history of Linux's master branch yields 78,483,866,182 files, and 83,627,462,277 files and directories.

Now, for a ballpark estimate of the number of pages that can be scraped if you have a copy of Linux, apply the formula:

(Ncommits * Nfiles) * 2 + (Ncommits * Nfilesandfolders) * 2 + Ncommits * 3

That is, applied to my hypothetical Linux repository:

78483866182 * 2 + 83627462277 * 2 + 1383738 * 3 = 324,226,808,132 pages

The first *2 accounts for the fact that every file of every commit can be scraped raw and git-blame 'd. The second part of the formula considers every single file or folder page (and, in theory, every file of every commit could also be diffed against its version in every other commit, which would inflate the count even further). The final Ncommits * 3 component considers the per-commit pages, such as the commit summary page.

That gives, for me, 324 billion 226 million 808 thousand and 132 pages that can be scraped. From a single repository. Assume that every scraper agent that enters one of these repositories will also take note of every other link on the page, and report it so that other agents can scrape them. These scrapers effectively act like early 2000s web spiders that crawled the internet to index it, except they do not care about robots.txt , and they will absolutely keep scraping new links again and again with no strategy to minimize the cost on you, as a host.
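For the skeptical, the arithmetic is easy to re-check:

echo '78483866182*2 + 83627462277*2 + 1383738*3' | bc  # prints 324226808132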

# The Cost of Scraping

As i am writing the original draft of this section, the longer-term measures i put in place have been removed, so i could gather up-to-date numbers to display how bad the situation is.

i pay for the electricity that powers my git Forge. Okay, actually, one of my roommates does, but we put it on the calc sheet where we keep track of who pays what (when we remember) .

At the time i began fighting scrapers, my git forge ran on an old desktop computer in my living room. Now, it runs in a virtual machine on our home's rackable server. i never got to measure differences in power consumption between being scraped or not on the desktop machine, but i did on the rackable server. If memory serves me right, stopping the wave of scrapers reduced the power draw of the server from ~170W to ~150W.

Right now, with all the hard drives in that server spinning, and every protection off, we are drawing 200W from the power grid on that server. Constantly. By the end of this experiment, me and my roommates will have computed that the difference in power usage caused by scraping costs us ~60 euros a year.

Another related cost is that the VM that runs the forge is figuratively suffocating from the amount of queries. Not all queries are created equal, either: requests to see the blame of a file or a diff between commits incur a worse cost than just rendering the front page of a repository. The last huge wave of scraping left my VM at 99+% usage of 4 CPU cores and 2.5GiB of RAM , whereas the usual levels i observe are closer to 4% usage of CPUs, and an oscillation between 1.5GiB and 2GiB of RAM.

As i'm writing this, the VM running forgejo eats 100% of 8 CPU cores.

Additionally, the networking cost is palpable. Various monitoring tools let me see the real-time traffic statistics in our apartment. Before i put the first measures in place to thwart scraping, we could visibly see the traffic coming out of the desktop computer running my forge and out to the internet. My roommates' complaints that it slowed down the whole internet here were in fact founded: when we had multiple people watching live streams or doing pretty big downloads, they were throttled by the traffic out of the forge. 1

The egress data rate of my forge's VM is at least 4MBps of data (32Mbps). Constantly.

Finally, the human cost: i have spent entire days behind my terminals trying to figure out 1) what the fuck was going on and 2) what the fuck to do about it. i have had conversations with other people who self-host their infrastructure, desperately trying to figure out workable solutions that would not needlessly impact our users. And the funniest detail is: that rackable server is in the living room, directly in front of my bedroom door. It usually purrs like an adorable cat, but, lately, it's been whirring louder and louder. i can hear it. when i'm trying to sleep .

# Let's do some statistics.

i was curious to analyze the nginx logs to understand where the traffic came from and what shape it took.

As a case study, we can work on /var/log/nginx/git.vulpinecitrus.info/ from 2025-11-14 to 2025-11-19 . Note that on 2025-11-15 at 18:27 UTC , i stopped the redirection of new agents into the Iocaine crawler maze (see below). At 19:15 UTC , i removed the nginx request limit zone from the /Lymkwi/linux/ path. At 19:16 UTC , i removed the separation of log files between IPs flagged as bots, and IPs not flagged as bots.

The three measures i progressively put in place later were: web caching (2025-11-17), manually sending IPs to a garbage generator with a rate-limit (Iocaine 2) (2025-11-14, 15 and 18), and then Iocaine 3 (2025-11-19).

Common Logs Successful Delayed (429) Error (5XX) Measures in place
2025-11-14 275323 66517 0 Iocaine 2.1 + Rate-limiting
2025-11-15 71712 54259 9802 Iocaine 2.1 + Rate-limiting
2025-11-16 140713 0 65763 None
2025-11-17 514309 25986 3012 Caching, eventually rate-limiting 2
2025-11-18 335266 20280 1 Iocaine 2.1 + Rate-limiting
2025-11-19 3183 0 0 Iocaine 3
Bot Logs Successful Delayed (429) Error (5XX) Measures in place
2025-11-14 (bots) 41388 65517 0 Iocaine 2.1 + Rate-limiting
2025-11-15 (bots) 34190 53403 63 Iocaine 2.1 + Rate-limiting
2025-11-16 (bots) - - - (no bot-specific logs)
2025-11-17 (bots) - - - (no bot-specific logs)
2025-11-18 (bots) 390013 0 13 Iocaine 2.1 + Rate-limiting
2025-11-19 (bots) 731593 0 0 Iocaine 3

Table 1: Number of Queries Per Day

(Commands used to generate Table 1)

Assuming your log file is git-access-2025-11-14.log.gz :

zcat git-access-2025-11-14.log.gz | grep '" 200 ' | wc -l
zcat git-access-2025-11-14.log.gz | grep '" 429 ' | wc -l
zcat git-access-2025-11-14.log.gz | grep '" 5[0-9][0-9] ' | wc -l

Without spoiling too much, caching was an utter failure, and the improvement i measured by manually rate-limiting a set of IPs (from Huawei Cloud and Alibaba) on the Linux repository only helped so much. When all protections dropped, my server became so unresponsive that backend errors (usually timeouts) spiked. Errors also happened with caching, when nginx encountered an issue while buffering a reply. Overall, caching encouraged more queries.

Once Iocaine was deployed, the vast majority of queries were routed away from the backend, with no errors reported, and no delaying because all of the IPs i manually rate-limited were caught by Iocaine instead.

Out of all these queries, 117.64.70.34 is the most common source of requests, with 226023 total queries originating from the ChinaNet-Backbone ASN (AS4134). It is followed by 136.243.228.193 (13849 queries), an IP from Hetzner whose hostname ironically resolves to crawling-gateway-136-243-228-193.dataforseo.com . Then comes 172.17.0.3 , the uptime prober of VC Status, with 6908 queries, and 74.7.227.127 , an IP from Microsoft's AS 8075 (6117 queries).
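The per-IP counts above can be reproduced with the same kind of one-liner as the tables (a variant shown here for completeness, not necessarily the exact command used):

zcat *git-access-2025-11-*.log.gz | awk '{ print $1 }' | sort | uniq -c | sort -rn | head -n 5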

Day Unique IP Count
2025-11-14 16461
2025-11-15 18639
2025-11-16 41712
2025-11-17 47252
2025-11-18 22480
2025-11-19 14230

Table 2: Grand Total of Unique IPs Querying the Forge

(Commands used to generate Table 2)

Assuming your log files are called *git-access-2025-11-14.log.gz :

zcat *git-access-2025-11-14.log.gz | awk '{ print $1 }' | sort | uniq -c | wc -l

On the two days where restrictions were lifted or there was only caching, the number of unique IPs querying the forge doubled. The more you facilitate the work of these crawlers, the more they are going to pound you. They will always try and get more out of your server than you are capable of providing.

Day Top 1 Top 2 Top 3 Top 4 Top 5
2025-11-14 (226089) - /reibooru/reibooru (40189) - /Lymkwi/linux (1454) - / (1405) - /rail (1174) - /Soblow/indi-hugo
2025-11-15 (35163) - /Lymkwi/linux (18952) - /vc-archival/youtube-dl (4197) - /vc-archival/youtube-dl-original (1655) - /reibooru/reibooru (1635) - /Lymkwi/gr-gsm
2025-11-14 (bots) (40189) - /Lymkwi/linux (270) - /oror/necro (79) - /Lymkwi/[REDACTED] 3 (55) - /vc-archival/youtube-dl (52) - /oror/asm
2025-11-15 (bots) (32895) - /Lymkwi/linux (260) - /oror/necro (193) - /Lymkwi/gr-gsm (95) - /Lymkwi/[REDACTED] 3 (48) - /alopexlemoni/GenderDysphoria.fyi
2025-11-16 (72687) - /vc-archival/youtube-dl (23028) - /Lymkwi/linux (16779) - /vc-archival/youtube-dl-original (5390) - /reibooru/reibooru (3585) - /Lymkwi/gr-gsm
2025-11-17 (361632) - /vc-archival/youtube-dl (74048) - /vc-archival/youtube-dl-original (18136) - /reibooru/reibooru (13147) - /oror/necro (12921) - /alopexlemoni/GenderDysphoria.fyi
2025-11-18 (227019) - /vc-archival/youtube-dl (46004) - /vc-archival/youtube-dl-original (12644) - /alopexlemoni/GenderDysphoria.fyi (12624) - /reibooru/reibooru (7712) - /oror/necro
2025-11-18 (bots) (261346) - /vc-archival/youtube-dl (43923) - /vc-archival/youtube-dl-original (20195) - /alopexlemoni/GenderDysphoria.fyi (18808) - /reibooru/reibooru (10134) - /oror/necro
2025-11-19 (1418) - / (1248) - /rail (356) - /Soblow (31) - /assets/img (25) - /Soblow/IndigoDen
2025-11-19 (bots) (448626) - /vc-archival/youtube-dl (73164) - /vc-archival/youtube-dl-original (39107) - /reibooru/reibooru (37107) - /alopexlemoni/GenderDysphoria.fyi (25921) - /vc-archival/YSLua

Table 3: Top 5 Successful Repo/Account/Page Hits Per Day

(Commands used to generate Table 3)

Assuming you want data for the log file called git-access-2025-11-14.log.gz :

 zcat git-access-2025-11-14.log.gz | grep '" 200 ' | awk '{ print $7 }' \
    | cut -d/ -f -3 | sort | uniq -c | sort -n \
    | tail -n 5 | tac

Big repositories with a lot of commits and a lot of files are a bountiful resource for the crawlers. Once they enter those, they will take ages to leave, at least because of the sheer amount of pages that can be generated by following the links of a repository.

Most legitimate traffic seems to be either fetching profiles (a couple of my users have their profiles listed in their fediverse bios) or the root page of my forge.

2025-11-14 (all) 2025-11-15 (all) 2025-11-16 (all)
Top 1 (8532) - AS136907 (Huawei Clouds) (8537) - AS136907 (Huawei Clouds) (8535) - AS136907 (Huawei Clouds)
Top 2 (2142) - AS45899 (VNPT Corp) (2107) - AS45899 (VNPT Corp) (4002) - AS212238 (Datacamp Limited)
Top 3 (803) - AS153671 (Liasail Global Hongkong Limited) (895) - AS153671 (Liasail Global Hongkong Limited) (3504) - AS9009 (M247 Europe SRL)
Top 4 (555) - AS5065 (Bunny Communications) (765) - AS45102 (Alibaba US Technology Co., Ltd.) (3206) - AS3257 (GTT Communications)
Top 5 (390) - AS21859 (Zenlayer Inc) (629) - AS5065 (Bunny Communications) (2874) - AS45899 (VNPT Corp)

Table 4: Top ASN Per Day For The First Three Days, Per Unique IP Count

(Commands used to generate Table 4)

For this, i needed a database of IP-to-ASN data. i got one from IPInfo by registering for a free account and using their web API. i first scripted a mapping of unique IP addresses to AS number. For example, for the log file bot-git-access-2025-11-18.log.gz :

while read ip; do
    ASN=$(curl -qfL api.ipinfo.io/lite/$ip?token=<my token> | jq -r .asn);
    printf "$ip $ASN\n" | tee -a 2025-11-18-bot.ips.txt;
done < <(zcat bot-git-access-2025-11-18.log.gz | awk '{ print $1 }' | sort | uniq)

Then, with this map, i run:

cat 2025-11-18-bot.ips.txt | cut -d' ' -f 2 | sort | uniq -c | sort -n | tail -n 5

So my largest hits are from Huawei Clouds (VPS provider), VNPT (a Vietnamese mobile and home ISP), Liasail Global HK Limited (a VPS/"AI-powering service" provider), Bunny Communications LLC (a broadband ISP for residential users), and Zenlayer (CDN/Cloud infrastructure provider). When i lifted all protections, Datacamp Limited (a VPS provider), GTT Communications (some sort of bullshit-looking ISP 4 who, i have been informed, is in fact a backbone operator), and M247 Europe SRL (a hosting provider) suddenly appeared. If memory serves me right, Datacamp, GTT and M247 were also companies i had flagged during my initial investigation in summer 2024, and added to the manually blocked/limited IPs alongside all of Huawei Cloud and Alibaba.

Interestingly, both Liasail and Zenlayer mention that they "Power AI" on their front page. They sure do. Worryingly, VNPT and Bunny Communications are home/mobile ISPs . i cannot ascertain for sure that their IPs are from domestic users, but it seems worrisome that these are among the top scraping sources once you remove the most obviously malicious actors.

# The Protection Measures

i have one goal, and one constraint. My goal is to protect the forge as much as possible, by means of either blocking bots or offloading the cost to my VPS provider (whose electricity i do not pay for). My only constraint: i was not going to deploy a proof-of-work-based captcha system such as Anubis . There are two reasons for this constraint:

  1. i personally find that forcing your visitors to expend more computational power to prove they're not a scraper is bad praxis. There are devices out there that legitimately want that access, but have limited computational power or features. And, yeah, there are multiple types of challenges , some of which take low-power devices into account or even those that cannot run JavaScript , but,
  2. Scrapers can easily bypass Anubis. It's not a design flaw. Anubis is harm reduction, not pixie dust.

i tried layers of solutions:

  • caching on the reverse proxy
  • Iocaine 2 with no classifiers, which generates garbage in reply to any query you send it
  • Manually redirecting IPs and rate-limiting them
  • Deploying Iocaine 3, with its classifiers (Nam-Shub-of-Enki)

## Reverse-Proxy Caching

i have a confession to make: i never realized that nginx did not cache anything by default. That realization promptly came with the other realization that caching things correctly is hard . i may, some day, write about my experience of protecting a service that posted links to itself on the fediverse, so that it wouldn't slow to a crawl for ten minutes after every post.

As for the rest of these, i will be showing my solution in nginx . You can, almost certainly, figure out a way of doing exactly the same thing with any other decent reverse proxy software.

To create a cache for my forge, i add the following line to /etc/nginx.conf :

proxy_cache_path /var/cache/nginx/forge/ levels=1:2 keys_zone=forgecache:100m;

That will create a two-level cache called forgecache , with a 100MB shared-memory zone for the cache keys and the cached data stored at /var/cache/nginx/forge . i create the directory and make www-data its owner and group.
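Concretely, that preparation step is just something like (adjust the user and group to whatever your nginx workers run as):

mkdir -p /var/cache/nginx/forge
chown www-data:www-data /var/cache/nginx/forge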

In /etc/nginx/sites-enabled/vcinfo-git.conf , where my git forge's site configuration sits, i have a location block that serves the whole root of the service, which i modify thusly:

location / {
    proxy_cache forgecache;
    proxy_buffering on;
    proxy_cache_valid any 1h;
    add_header X-Cached $upstream_cache_status;
    expires 1h;
    proxy_ignore_headers "Set-Cookie";
    proxy_hide_header "Set-Cookie";

    # more stuff...
}

That configuration does several things: it turns on caching and buffering at the proxy ( proxy_buffering ), telling it to use forgecache ( proxy_cache ) and keep any page valid for an hour ( proxy_cache_valid ). It also adds a response header ( X-Cached , via add_header ) that lets you debug whether a query hit or missed the cache. The expires directive adds headers telling your visitor's browser that the content they cache will also expire in an hour ( expires ). Finally, the cache ignores and strips any response header that sets a cookie ( proxy_ignore_headers , proxy_hide_header ), in an attempt to keep out any page that could be customized for a user once they log in.

The result? Caching was a disaster , predictably so. Caching works when the same resource is repeatedly queried, like page assets, JavaScript, style sheets, etc. In this case, the thousands of actors querying my forge are somehow coordinated: they never (or rarely) query the same resource twice, and they only download the raw HTML of the web pages.

Worse, caching messed up the display of authenticated pages. The snippets above are not enough to distinguish between an authenticated session and an unauthenticated one, and it broke my forge so badly that i had to disable caching and enable the next layer early on 2025-11-17 , or i just could not use my forge.

## Rate-Limiting on the Proxy

The next layer of protection simply consisted in enabling a global rate-limit on the most-hit repositories:

limit_req_zone wholeforge zone=wholeforge:10m rate=3r/s;

server {
    // ...
	location ~ (/alopexlemoni/GenderDysphoria.fyi|/oror/necro|/Lymkwi/linux|/vc-archival/youtube-dl-original|/reibooru/reibooru) {
		proxy_set_header Host $host;
		proxy_set_header X-Real-IP $remote_addr;
		proxy_max_temp_file_size 2048m;

		limit_req zone=wholeforge nodelay;

		proxy_pass http://<my actual upstream>/;
	}
}

This was achieved with two directives. The first one, limit_req_zone , sits outside the server {} block and defines a zone called wholeforge that stores 10MB of state data and limits to 3 requests per second. The second one, limit_req , applies that zone inside the location block; nodelay means that requests over the limit are rejected immediately instead of being delayed.

When this was in place, however, actually accessing the Linux repository as a normal user (or any of the often-hit repositories) became a nightmare of waiting and request timeouts.

## Manually Redirecting to a Garbage Generator

Because caching was (predictably) useless, and rate-limiting was hindering me as well, i re-enabled the initial setup that was in place before my experiments: manually redirecting queries to a garbage generator (in this case, an old version of Iocaine). It's largely based on my initial setup following this tutorial in French .

For the purpose of this part, you do not have to know what Iocaine does precisely. In the next section, i will present my current and final setup, with an updated Iocaine that also includes a classifier to decide which queries are bots and which are regular users. For now, i will present the version where i manually chose who to return garbage to based on IP addresses.

As a little bonus, it will also include rate-limiting of those garbage-hungry bots.

i add a file called /etc/nginx/snippets/block_bots.conf which contains:

if ($bot_user_agent) {
    rewrite ^ /deflagration$request_uri;
}
if ($bot_ip) {
    rewrite ^ /deflagration$request_uri;
}
location /deflagration {
    limit_req zone=bots nodelay;
    proxy_set_header Host $host;
    proxy_pass <garbage upstream>;
}

This will force any query categorized as bot_user_agent or bot_ip to be routed to a different upstream which serves garbage. That upstream is also protected by rate-limiting on a zone called bots , which is defined in the next bit of code. This snippet is actually meant to be included in your server {} block using the include directive.

i then add the following in /etc/nginx/conf.d/bots.conf :

map $http_user_agent $bot_user_agent {
    default 0;

    # from https://github.com/ai-robots-txt/ai.robots.txt/blob/main/robots.txt
    ~*amazonbot 1;
    ~*anthropic-ai  1;
    ~*applebot  1;
    ~*applebot-extended 1;
    ~*brightbot 1;
    ~*bytespider  1;
    ~*ccbot 1;
    ~*chatgpt-user  1;
    ~*claude-web  1;
    ~*claudebot 1;
    ~*cohere-ai 1;
    ~*cohere-training-data-crawler  1;
    ~*crawlspace  1;
    ~*diffbot 1;
    ~*duckassistbot 1;
    ~*facebookbot 1;
    ~*friendlycrawler 1;
    ~*google-extended 1;
    ~*googleother 1;
    ~*googleother-image 1;
    ~*googleother-video 1;
    ~*gptbot  1;
    ~*iaskspider  1;
    ~*icc-crawler 1;
    ~*imagesiftbot  1;
    ~*img2dataset 1;
    ~*isscyberriskcrawler 1;
    ~*kangaroo  1;
    ~*meta-externalagent  1;
    ~*meta-externalfetcher  1;
    ~*oai-searchbot 1;
    ~*omgili  1;
    ~*omgilibot 1;
    ~*pangubot  1;
    ~*perplexitybot 1;
    ~*petalbot  1;
    ~*scrapy  1;
    ~*semrushbot-ocob 1;
    ~*semrushbot-swa  1;
    ~*sidetrade 1;
    ~*timpibot  1;
    ~*velenpublicwebcrawler 1;
    ~*webzio-extended 1;
    ~*youbot  1;

    # Add whatever other pattern you want down here
}

geo $bot_ip {
    default 0;

    # Add your IP ranges here
}

# Rate-limiting setup for bots
limit_req_zone bots zone=bots:30m rate=1r/s;

# Return 429 (Too Many Requests) to slow them down
limit_req_status 429;

That bit of configuration maps the client IP to a variable called bot_ip , and the client's user agent to a variable called bot_user_agent . When a known pattern listed in those blocks is found, the corresponding variable is flipped to the provided value (here, 1 ). Otherwise, it stays 0 . Then, we define the rate-limiting zone that is used to slow down the bots so they don't feed on slop too fast. You will then need to install the http-geoip2 nginx module (on Debian-based distributions, something like apt install libnginx-mod-http-geoip2 will do).

Once that is done, add the following line to the server block of every site you want to protect:

include /etc/nginx/snippets/block_bots.conf;

And when you feel confident enough, roll a nginx -t and reload the unit for nginx .
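On a systemd-based system, that amounts to something like:

nginx -t && systemctl reload nginx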

Now, if you're using caddy or any other reverse proxy, there are probably similar mechanisms available. You can go and peruse the documentation of Iocaine, or look online for specific tutorials that, i am sure, other people have made better than i would.

Immediately after enabling it, and shoving all the IPs from Alibaba Cloud and Huawei Cloud into the bot config file, the activity on my server slowed down. Power usage went down to ~180W, CPU usage to roughly 60%, and it stopped making a hellish noise.

As the stats showed earlier, however, a lot of traffic was still hitting the server itself. Even weirder, there were still occasional spikes, every 3 hours, lasting about an hour and a half, where the server would whirr and forgejo would suffocate again.

Bots were still hitting my server, and there was no clear source for it.

## Automatically Classifying Bots and Poisoning Them: Iocaine and Nam-Shub-of-Enki

The steps i showed so far help when a single IP is hammering at your forge, or when someone is clearly scraping you from an Autonomous System that you do not mind blocking. Sadly, as i've shown above in Table 4 , a surprising amount of scraping comes from broadband addresses. i can assemble lists of IPs as big as i want, or block entire ASNs, but i would love to have a per-query way of determining if a query looks legitimate.

The next steps of protection rely on categorizing a source IP based on the credibility of its user agent. This mechanism is largely based on the documentation for Iocaine 3.x . We finally get to talk about Iocaine!

Iocaine is a tool that traps scrapers in a maze of meaningless pages that endlessly lead to more meaningless pages. The content of these pages is generated using a Markov chain, based on a corpus of texts given to the software. Iocaine (specifically versions 3 and later, at least 5 ) is a middleware, in the sense that it works by being placed on the line between your reverse proxy and the service. Your reverse proxy first redirects traffic to Iocaine, and, if Iocaine deems a query legitimate, it returns a 421 Misdirected Request back at your reverse proxy. The latter must then catch it, and use the real upstream as a fallback. If Iocaine's Nam-Shub-of-Enki 6 decides the query came from a bogus or otherwise undesirable source, it will happily reply 200 OK and send generated garbage.

My setup lodges Iocaine 3 between nginx and my forge, following the Iocaine documentation to use the container version . i recommend you follow it, and then add the next little things to enable categorization statistics, and prevent the logging they're based on from blowing up your storage:

  1. In etc/config.d/03-nam-shub-of-enki.kdl , change the logging block to:
logging {
    enable #true
    classification {
        enable #true
    }
}
  2. In docker-compose.yaml , add the following bits to limit classification logging to 50MB:
services:
  iocaine:
    # The things you already have here...
    # ...
    environment:
      - RUST_LOG=iocaine=info
    logging:
      driver: "json-file"
      options:
        max-size: "50m"

My checks block in Nam-Shub-of-Enki is as follows:

checks {
    disable cgi-bin-trap

    asn {
        database-path "/data/ipinfo_lite.mmdb"
        asns "45102" "136907"
    }
    ai-robots-txt {
        path "/data/ai.robots.txt-robots.json"
    }
    generated-urls {
        identifiers "deflagration"
    }
    big-tech {
        enable #true
    }
    commercial_scrapers {
        enable #true
    }
}

i snatched a copy of the latest ipinfo ASN database for free and blocked AS45102 (Alibaba) and AS136907 (Huawei Clouds).

On 2025-11-18 at 00:00:29 UTC+1, i enabled Iocaine with the Nam-Shub-of-Enki classifier in front of my whole forge. Immediately, my server was no longer hammered. Power draw went down to just above 160W.

One problem i noticed, however, while trying to deploy the artifact for this blog post on my forge, is that Iocaine causes issues when huge PUT / PATCH / POST requests with large bodies are piped through it: it will hang up before the objects are entirely written. i am trying to figure out a way of only redirecting HEAD and GET requests to Iocaine in nginx, as is done in the Caddy example of the Iocaine documentation.

What i ended up settling on requires a bit of variable mapping. At the start of your site configuration, before the server {} block:

map $request_method $upstream_location {
	GET	<iocaine upstream>;
	HEAD	<iocaine upstream>;
	default	<your actual upstream>;
}

map $request_method $upstream_log {
	GET	bot_access;
	HEAD	bot_access;
	default	access;
}

Then, in the block that does the default location, write:

	location / {
	    proxy_cache off;
	    access_log /var/log/nginx/$upstream_log.log combined;
	    proxy_intercept_errors on;
	    error_page 421 = @fallback;
	    proxy_set_header Host $host;
	    proxy_set_header X-Real-IP $remote_addr;
	    proxy_pass http://$upstream_location;
	}

That is, replace the upstream in proxy_pass with the upstream decided by the variable mapping and, while we're at it, use $upstream_log to know which log file that request ends up in. The @fallback named location referenced by error_page is the one that proxies to the real upstream when Iocaine answers 421 Misdirected Request. i differentiate between bot_access.log and access.log to gather my statistics, so the difference matters to me. Change the variables to suit the way you do it (or remove them, if you don't distinguish clients in your log files).
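A quick way to poke at the result, assuming the ai.robots.txt check is active: send one request with a known scraper user agent and one with a regular browser user agent, and compare what comes back. The former should receive Iocaine's generated garbage; the latter should (usually) get the real forge page, although an over-eager classifier may still flag a bare curl that merely claims to be a browser. Here git.example.com stands in for your forge's hostname:

curl -s -A 'GPTBot' https://git.example.com/ | head -c 300
curl -s -A 'Mozilla/5.0 (X11; Linux x86_64; rv:133.0) Gecko/20100101 Firefox/133.0' https://git.example.com/ | head -c 300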

# Monitoring Iocaine

Currently, on 2025-11-30 at 16:33:00 UTC+1, Iocaine has served 38.16GB of garbage. Over the past hour, 152.11MB of such data was thrown back at undesirable visitors. 3.39GB over the past day, 22.22GB over the past week. You can get the snippet that describes my Iocaine-specific Grafana views here .

The vast majority of undesirable queries come from Claude, OpenAI, and Disguised Bots. Claude and OpenAI are absolutely gluttonous, and, once they have access to a ton of pages, they will greedily flock to fetch them like pigeons being fed breadcrumbs laced with strychnine.

Hits by Ruleset on my Grafana

AI bot scrapers ( ai.robots.txt ) maintain a constant 920~930 queries per minute (15-ish QPS) over the 6 domains i have protected with Iocaine, including the forge.

There is also a low hum of a mix of commercial scrapers (~1 request every two seconds), big tech crawlers (Facebook, Google, etc, about 2 QPS or 110 queries/min), and, especially, fake browsers.

Classifying fake browsers is where Iocaine really shines, specifically thanks to the classifiers implemented via Nam-Shub-of-Enki. The faked bots classifier detects the likelihood that the user agent reported by the client is bullshit, generated from a list of technologies mashed together. For example, if your client reports a user agent for a set of software that never supported HTTP2, or never actually existed together, or is not even released yet , it will get flagged. Think, for example, Windows NT 4 running Chrome, pretending to be able to do TLS1.3.

The background-noise level of such queries is usually 140~160 queries per minute (or 2~3 QPS). However, notice those spikes in the graph above?

## The Salvos of Queries

For a while during my experiments i noticed those pillars of queries. My general nginx statistics would show a sharp increase in connections, with an initial ramp-up and a stable-ish plateau lasting about an hour and a half, before suddenly stopping. It would then repeat, roughly three hours later.

Between October 29th and November 19th, and on November 28th, these spikes would constantly show up. As soon as i got Iocaine statistics running, it would flag all of those queries as faked browsers.

i investigated those spikes in particular, because they baffled and scared me: the regularity with which they probed me, and the sharpness of the ramp-up and halts, made me afraid that someone, somewhere, was organizing thousands of IPs to specifically take turns at probing websites. i have not reached any solid conclusions, beyond the following:

  • The initial phase of an attack wave begins with a clear exponential ramp-up
  • The ramp-up stops when the server starts either throwing errors, or the response latency reaches a given threshold
  • Every wave of attack lasts roughly one hour and a half
  • An individual IP will often contribute no more than one query, but it can reach 50 to 60 queries per IP
  • The same 15 or so ASNs keep showing up, with five regular leaders in IP count:
    1. AS212238: Datacamp Limited
    2. AS3257: GTT Communications
    3. AS9009: M247 Europe SRL
    4. AS203020: HostRoyale Technologies Pvt Ltd
    5. AS210906: UAB "Bite Lietuva" (a Lithuanian ISP)

All of those are service providers. My working theory at the moment is that someone registered thousands of cheap servers with many different companies, and is selling access to them as web proxies for scraping and scanning. i will probably write something up later when i have properly investigated that specific phenomenon.

# Conclusion

Self-hosting anything that is deemed "content" openly on the web in 2025 is a battle of attrition between you and forces who are able to buy tens of thousands of proxies to ruin your service for data they can resell.

This is depressing. Profoundly depressing. i look at the statistics board for my reverse-proxy and i never see less than 96.7% of requests classified as bots at any given moment. The web is filled with crap, bots that pretend to be real people to flood you. All of that because i want to have my little corner of the internet where i put my silly little code for other people to see.

i have to learn to protect myself from industrial actors in order to put anything online, because anything a person makes is valuable, and that value will be sucked dry by every tech giant to be emulsified, liquified, strained, and ultimately inexorably joined in an unholy mesh of learning weights.

This experience has rather profoundly radicalized the way i think about technology. Sanitized content can be chewed on and shat out by companies for training, but their AI tools will never swear. They will never use a slur. They will never have a revolutionary thought. Despite being amalgamations of shit rolled up in the guts of the dying capitalist society, they are sanitized to hell and beyond.

The developer of Iocaine put it best when explaining why Iocaine has absolutely unhinged identifiers (such as SexDungeon , PipeBomb , etc): they will all trigger "safeguard" mechanisms in commercial AI tools: absolutely no coding agent will accept analyzing and explaining code where the memory allocator's free function is called liberate_palestine . i bet that if i described, in graphic detail, in the comments of this page, the different ways being a furry intersects with my sexuality, no commercial scraper would even dare ingest this page.

Fuck tech companies. Fuck "AI". Fuck the corporate web.

2026: A Year of Reckoning

Portside
portside.org
2025-12-12 05:43:18
2026: A Year of Reckoning jay Fri, 12/12/2025 - 00:43 ...
Original Article

Millions demonstrated. Cities mobilized in defense of their people. Judges and juries upheld the law. Voters laid the basis for revoking the 2024 balance of power and issuing a new mandate for progressive change.

We have the power to make 2026 a year of reckoning, of decisive defeats for the MAGA movement. We believe that a revitalized Left, with its vision of a multiracial democratic and working-class movement, is key to ousting the MAGA crowd at every level of government in every region of the country.

This is a time for incisive analysis and bold initiatives, for strategizing and organizing for real change. For devising new tactics and thinking big about what can be achieved. We at Portside will be working to provide you and other readers the best strategic thinking and analysis we can find from a multitude of sources. We will continue to reflect the struggles, in our country and globally, for peace, security and justice. Once a year we ask you to help us do that.

Support This Vision

This year showed what it looks like for people to make their own history.

New York voters generated a political thunderclap by electing a democratic socialist mayor. California answered Trump’s gerrymander. Chicago gave new meaning to whistleblowing and Portland launched the Frog Brigade. Each such creative act inspires new actions.

By these actions and many more, people punctured the facade of racist and reactionary omnipotence and created a new political reality. We believe that is a signal of what is to come. We look forward to many more reckonings in 2026.

Every day we search the Internet for examples of people making history, including frontline reporting, cogent argument, culture and humor. We look for and share insights from science. Every day, we share the best that we find with you.

To receive a short daily update of these materials, subscribe to Portside Snapshot .

As you probably know, we moderators of Portside work on an entirely volunteer basis. We’re rewarded by the readers who put the information we provide to use to secure a better future, to advance toward a qualitatively more just society.

We pledge to keep doing what we've been doing. We ask you to help us by donating to keep our servers running and our website current.

Support This Vision

We are delighted that in the last year visits to the Portside website tripled. More people are recommending material and more authors are submitting their writings for consideration. We are dedicated to serving as your eyes and ears in the digital universe. Keep sending your input to either portside@portside.org or reader comments .

Please contribute to keep this project going. We promise to make every donation go a long way toward the future we seek together. We don’t ask our readers for financial support often. If you want to be a part of this project and to keep it going strong, this is the time to support Portside.

Yours in struggle,

The entire Portside crew

Judy Atkins, Jonathan Bennett, Mark Brody, Barry Cohen, David Cohen, Ira Cohen, Jeannette Ferrary, Marti Garza, Greg Heires, Geoffrey Jacques, Will Jones, Maureen LaMar, Stephanie Luce, Ray Markey, John P. Pittman, Natalie Reuss, Lee Rossi, Nan Rubin, Meredith Schafer, Jay Schaffner, Kurt Stand, Ethan Young

Checks should be made payable to PORTSIDE and sent to:

Portside
355 Eighth Avenue #1J
New York, NY 10001-4839

Will the Court Rule for Trump or for Wall Street?

Portside
portside.org
2025-12-12 05:23:11
Will the Court Rule for Trump or for Wall Street? jay Fri, 12/12/2025 - 00:23 ...
Original Article

Last Monday, the Court held oral arguments in a case concerning Trump’s firing of a member of the Federal Trade Commission for no reason other than his desire to install his own person in her place. For the Court to rule in Trump’s favor, it would have to overturn the 90-year-old Court decision in Humphrey’s Executor , which held that Congress had established regulatory commissions with fixed terms for commissioners that didn’t coincide with presidential terms, so that their rulings would be insulated from direct political pressures and thus better serve, in a disinterested way, the public’s interests. In its 1935 decision, the Court ruled that President Franklin Roosevelt couldn’t fire a commission member without establishing that member’s misconduct, and that has remained the law until now.

In its preliminary rulings on commission members fired by Trump since he began his second term, however, the Court upheld those firings and hinted that when a full-blown case came before it, as it did on Monday, it would strike down Humphrey’s Executor . That would comport with the Republican justices’ pattern of generally vesting more power in the presidency and specifically letting Trump run amok, as it did in its ruling last year holding that a president could not be held accountable for any actions he (or she) took in an official capacity, save by congressional impeachment. That went against the words inscribed on the Court itself—“Equal Justice Under Law”—but the Court’s push for a unitary executive, which under Trump has come close to meaning autocratic rule, clearly mattered more than that “equal justice” nonsense, and certainly more than all those federal laws, dating back to the late 19th century, establishing commissions to protect small businesses, workers, consumers, and people who breathe air and drink water.

However, in the course of Monday’s session, Justice Brett Kavanaugh raised an issue that doubtless troubled his fellow Republicans: If they struck down Humphrey’s Executor , how could they preserve the one institution that works in the interest of America’s banks, Wall Street, and generally, the rich: the Federal Reserve? But for the brief interlude when liberals, under the leadership of Chief Justice Earl Warren, dominated the Court, that body has always been the defender of big business interests. In that sense, the most paradigmatic decisions the Court has ever rendered were those in which it extended the rights that citizens had won under the 14th Amendment to the corporations and cartels that arose after the Civil War, while denying those rights to the very Black Americans that the 14th Amendment was written to protect.

It’s the Fed that holds ready cash on tap for big banks if they get in trouble, that can regulate those banks or block regulations on them, and that banks, corporations, bondholders, and the rich count on to deter inflation (which can devalue their holdings), chiefly by raising interest rates and unemployment rates. No governmental body has done more to block full employment, thereby squelching the interests of working-class Americans while enhancing the income and wealth of the wealthy. There have been times when even Republican presidents, anxious about the public’s view of their economic stewardship, have wanted the Fed to lower interest rates—Nixon is on that list, and Trump most surely is as well—but a staunch independent Fed, free from presidential tinkering thanks to Humphrey’s Executor , can defend the rich even from them.

On Monday, Kavanaugh asked the attorney from Trump’s Justice Department, who was arguing in favor of striking Humphrey’s down, to find a way that the Court could strike down regulatory commissions but leave the Fed intact. The best that attorney—Solicitor General D. John Sauer—could do was answer that firing the Fed’s board members would “raise their own set of unique distinct issues.” What those issues might be, other than unique and distinct, he declined to say.

Let it not be said that in this moment of judicial uncertainty I was reluctant to help out. So let me suggest some ways in which the Fed and its governors are unique and distinct from other regulatory boards and their commissioners. The Federal Trade Commission protects small businesses and consumers from oligopolistic pricing and other abuses of big businesses. The Securities and Exchange Commission protects investors and the public from scams and financial manipulation. The National Labor Relations Board oversees workers’ rights to collective bargaining. The Consumer Product Safety Commission investigates new products and can publicize and, if need be, ban the sale of those that are unsafe. The Consumer Financial Protection Bureau protects bank depositors and the public generally against the exploitative practices of financial institutions. These, and a host of other such regulatory bodies, may see their powers effectively diminished by hostile administrations or by court rulings in favor of the businesses they regulate, but there’s no question that Congress established them to protect, in the broadest sense, the public interest.

The mission of the Fed, by contrast, has always been to protect the public interest as primarily defined as protecting capital over labor . To be sure, in its ability to arrest financial panics, the Fed can serve, and has served, the interest of the nation at large. But it can also elevate, and has elevated, the interests of the rich over everyone else. In the early 1980s, under the leadership of Paul Volcker, the Fed raised interest rates to the point that people were no longer buying cars. That brought down the inflation that was reducing the value of the bonds in which the rich had invested, but it also caused the closure of thousands of factories, beginning the hollowing out of American manufacturing and turning our industrial states into the Rust Belt. Just as the subsequent offshoring of industry had the effect of reducing Americans’ income, so did the Fed’s long-term war against a full-employment economy, in which workers can gain the power to bargain for higher wages.

When F. Scott Fitzgerald once observed that “the rich are different from you and me,” his friend Ernest Hemingway replied, “Yes, they have more money.” That is what differentiates the Fed’s clientele, as it were, from the clientele of the other regulatory agencies. It may present a challenge to the six Court Republicans to create a doctrine that propounds a legal basis for that distinction, and some of them may be so Trump-smitten that they’re willing to give over the Fed to Trump’s every whim anyway. Still, Wall Street is counting on them to keep the Fed beholden to Wall Street rather than any president.

Hence, the Court’s conundrum. And whichever way they rule—in favor of even more expansive and unchallengeable presidential power, or in favor of keeping the Fed as is—we lose.

More by Harold Meyerson ]

Read the original article at Prospect.org .

Used with permission. © The American Prospect , Prospect.org, 2025. All rights reserved.

Support the American Prospect .

Click here to support the Prospect's brand of independent impact journalism.

Asking the Right Questions When Lawless US Officials Go on a Murder Spree

Portside
portside.org
2025-12-12 04:45:47
Asking the Right Questions When Lawless US Officials Go on a Murder Spree jay Thu, 12/11/2025 - 23:45 ...
Original Article

It’s all good that members of Congress, mainstream headlines, and the various talking heads are discussing something we don’t hear about often enough—that the US is committing war crimes ; how we need a new War Powers Resolution; or why heads should roll at the Pentagon over a series of boat bombings in the Pacific and Caribbean in recent months that are nothing less than premeditated murder.

It is important that—after years of the Pentagon using unchecked power to carry out violence all over the world—Congress is finally asking questions. The debate is rising over whether the Sept. 2 murder of two men who survived an initial US attack on their small boat in international waters of the Caribbean, only to be killed by a second US bombing designed specifically to eliminate them, constitutes a war crime.

But there are three huge problems this rising debate mostly ignores. First, some of these actions, like a president declaring unilaterally that a war exists (when it doesn’t) and then bombing unarmed civilian boats in international waters, are inherently illegal, regardless of whether anyone is killed by the first, second, or tenth bomb. That’s an act of piracy and murder, not a legal use of armed force. Second, too few of the discussions of illegal US military actions take the issue of accountability seriously. How far up—and down—the chain of command does responsibility go? And third, congressional considerations of legality, accountability, and War Powers resolutions much too often are limited to situations in which US troops might be put in harm’s way. The actual deaths of civilians who happen to be Venezuelan or Yemeni or Iranian are simply not part of the calculus.

So, yes, of course, the secondary bombing of shipwrecked sailors on Sept. 2 was illegal and should be seen for what it was: murder in cold blood. That is because deliberately targeting and killing anyone at sea whose boat is sinking and who is waving for help, is a crime. But it must be said—loudly and repeatedly—that all the kill-them-all-with-one-bomb strikes on boats in the Caribbean and the eastern Pacific that the US has carried out since September were also illegal. As of December 6, US forces have killed at least 87 people—Venezuelan, Colombian, Trinidadian, and Ecuadoran victims—with US drones and bombs in the region. None of the people or boats are alleged to have been armed or to have been transferring weapons.

A presidential pronouncement that unidentified people allegedly committing a crime are now to be considered “narco-terrorists” (without any evidence or legal definition) and that alleged drug smugglers are now to be treated as “combatants” in an “armed conflict” (which does not exist), does not make that claim true.

It’s also a classic “even if” situation. Even if we knew who was on the boats, and even if they were actually smuggling drugs, and even if there was a real war underway, no law, neither US nor international, allows for the extra-judicial killing of someone involved in a crime who does not present an immediate threat of armed attack. Such an act is murder, pure and simple, not a legal military act. Former President of the Philippines Rodrigo Duterte is currently in prison for exactly such acts: he remains in detention at the International Criminal Court in The Hague, charged with crimes against humanity for his involvement in the killings of thousands of people alleged to be drug dealers, killed by police and the military during his anti-drug campaigns.

This series of killings from above does not constitute an act of war, but rather a set of extrajudicial murders carried out on the orders of the US government’s highest officials, namely President Donald Trump and Secretary of Defense Pete Hegseth.

As part of a lawsuit filed this week by the ACLU, Center for Constitutional Rights, and New York Civil Liberties Union aimed at forcing the Trump White House to release an internal Office of Legal Counsel opinion being used to justify the strikes, Jeffrey Stein, a staff attorney with the ACLU’s National Security Project, said, “The public deserves to know how our government is justifying the cold-blooded murder of civilians as lawful and why it believes it can hand out get-out-of-jail-free cards to people committing these crimes.”

The Trump administration , Stein argued, “must stop these illegal and immoral strikes, and officials who have carried them out must be held accountable.” That is exactly right.

And so yes, it’s good that outraged opposition is rising against the deliberate murders of the two survivors seen clinging to the sinking boat in the Caribbean. But it’s vitally important that the opposition, including lawmakers in Congress who are on the front lines of holding Trump and Hegseth to account, also challenge the illegality of every such bombing by the Pentagon. And that doesn’t seem to be happening. As the New York Times put it, “the idea that something was bad about that particular strike implicitly suggests the first one on that boat—and all the other attacks on other boats—was fine. And the exercise reinforced the premise that the situation should be thought about through the lens of an armed conflict.” Which does not exist.

As many rights experts have pointed out, there is no “war crime” when there is no war. These murders must be seen for what they are.

When the talk in Congress turns to whether these military operations should trigger a new War Powers Act resolution to rein in the out-of-control military, the debate generally focuses on whether these attacks count as “hostilities,” which would trigger the Act. The word isn’t defined in the Act, but US presidents have generally defined “hostilities” very narrowly, as something that puts US troops at risk. And too rarely has Congress challenged that position. The result is that Venezuelan and Colombian and Ecuadoran and other fisherfolk, other civilians, maybe some small-scale criminals, can all be put at risk without consequence. They are being killed without anyone having to state publicly who they are, what was in their boats, where they were heading, even their names—let alone how we know any of that—and apparently their lives have no bearing on the legality or illegality of their murder.

None of this should be surprising. Three weeks after the September 2 boat attack, Secretary of Defense Pete Hegseth addressed almost 1000 of the military’s top brass summoned from all over the world to hear his talk. He said, “We also don’t fight with stupid rules of engagement. We untie the hands of our warfighters to intimidate, demoralize, hunt, and kill the enemies of our country. No more politically correct and overbearing rules of engagement, just common sense, maximum lethality, and authority for warfighters.”

“Maximum lethality”—without even a modicum of justice–is what we’re seeing in all the boat attacks, not only the second strike that has generated the most concern. “Authority for warfighters” appears to be plentiful in the form of zero accountability for top commanders or ordinary troops of the anti-speedboat campaign. And that authority is visible as well in the targeting and threats against the six congressmembers, all former military and intelligence officers, who dared remind US troops that their oaths allow, indeed require them to refuse to carry out illegal orders.

Of course, the boat bombings are not the only recent examples of crimes by the military carried out without consequence.

A renewed level of outrage erupted recently when a new Pentagon investigation found that the secretary of defense had indeed put US troops in harm’s way during a seemingly casual, emoji-filled Signal chat involving top military and intelligence officials, in which he announced and discussed secret operational details of imminent US attacks on alleged Houthi targets in Yemen . Signalgate was a huge story back in March , when the news first broke that several unauthorized people were on the unsecure chat, including, apparently by mistake, the editor-in-chief of The Atlantic magazine.

Certainly, some level of alarm was justified because US troops were indeed put at risk, because normal security regulations were ignored, because the whole episode reeked of carelessness, and because military rules were violated with apparent impunity. But no one seemed concerned that the real outrage of this scheme was not the unauthorized Signal chat, but that the US military was off doing the job Hegseth proudly reminded them of in Quantico–“killing people and breaking things.” As it turned out, they were killing innocent Yemeni civilians and destroying Yemeni civilian infrastructure .

That time, it may have been a war crime, although the US was not officially at war with Yemen. There were a number of countries involved in military action in the Red Sea, but the US military forces there were deployed with no regard for international law or for the US Constitution , which says only Congress can declare war. It is exactly during wartime conditions that the Geneva Conventions and other laws of war apply–prohibiting collective punishment, attacks on civilians, failure to distinguish between military and civilian targets. For Hegseth , describing it during the Signal chat, the concern was not the illegality of the attacks, but how to justify them to the public: “Nobody knows who the Houthis are—which is why we would need to stay focused on 1) Biden failed and 2) Iran funded.” It was all just a partisan PR job. Now, of course, with the release of the latest Pentagon report this week, Signalgate outrage is returning—but the focus is still on how the unsecure call put US troops at risk—not on how many Yemeni civilians, how many children and elders, were killed and injured—a clear war crime.

Then there was Iran. In June, Israel launched 12 days of bombing raids against Iran, destroying military, energy, government, media, and residential targets. Iran’s government reported 935 people killed, including 38 children; almost all of Iran’s retaliatory missiles were intercepted by Israeli and US anti-missile systems, and 28 people were killed in Israel. The US made no effort to criticize Israel’s use of US weapons in the initial assault, in clear violation of US laws restricting use of weapons it provided. Instead, the US joined Israel and launched its own attack targeting Iran’s nuclear facilities with 30,000-pound “bunker-buster” bombs, the largest non-nuclear bombs in the US arsenal. Trump immediately announced the US strikes had “completely and totally obliterated” Iran’s nuclear facilities. The claim was denied within hours by his own Chairman of the Joint Chiefs of Staff, Gen. Dan Caine, and within a couple of days by the Defense Intelligence Agency’s finding that the strikes had only set back Iran’s nuclear program three to six months.

The public debate then focused almost entirely on which damage assessments were more likely correct—did the US strikes actually “obliterate” the equipment, or did they only partly damage it? Was Iran’s nuclear capacity destroyed or only set back? Few analysts and virtually no one in Congress interrogated the illegality of both the Israeli and the US strikes to begin with. Violations of the Geneva Conventions’ prohibitions on collective punishment, failure to distinguish between military and civilian targets, and the specific international legal bans on targeting nuclear facilities are all separate war crimes. But the US was not at war with Iran; it was Washington, not Tehran, that withdrew from the JCPOA, the Iran nuclear deal, back in 2018. So the US attack on Iran was illegal, and would have been illegal even if none of the Geneva Conventions’ laws of war were violated. Washington’s actions were probably not war crimes, but they may have been crimes against humanity, and they may have constituted the crime of aggression. But few policymakers or pundits were even considering those realities.

After September 11, 2001, the US declared a set of open-ended wars and military operations, in which Congress largely voluntarily surrendered its power to declare war and responsibility to provide oversight of wars through funding decisions. The US government and military became used to killing, wounding, and destroying without even explaining, let alone answering for their actions. Pete Hegseth deployed as an officer in that era. He commanded troops in two places where US forces notoriously violated the law and dehumanized people: the infamous US prison at Guantanamo Bay, and Iraq. Now he is in charge of the entire Pentagon. The violations that we are witnessing in the Caribbean and elsewhere today were years in the making. We cannot be satisfied with a Congressional challenge only on the most narrow and technical terms. We need to demand an end to all unchecked illegal US military action—and full accountability for the damage that it has already done.

[ Phyllis Bennis is a fellow of the Institute for Policy Studies and serves on the national board of Jewish Voice for Peace. Her most recent book is "Understanding Palestine and Israel" (2025). Her other books include: "Understanding the US-Iran Crisis: A Primer" (2008) and "Challenging Empire: How People, Governments, and the UN Defy US Power" (2005). ]

[ Khury Petersen-Smith is the Michael Ratner Middle East Fellow and co-director of the New Internationalism Project at the Institute for Policy Studies. ]

Licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.

How the Elite Behave When No One Is Watching: Inside the Epstein Emails

Portside
portside.org
2025-12-12 04:21:46
How the Elite Behave When No One Is Watching: Inside the Epstein Emails jay Thu, 12/11/2025 - 23:21 ...
Original Article

A close read of the thousands of messages makes it less surprising. When Jeffrey Epstein, a financier turned convicted sex offender, needed friends to rehabilitate him, he knew where to turn: a power elite practiced at disregarding pain.

At the dark heart of this story is a sex criminal and his victims — and his enmeshment with President Trump. But it is also a tale about a powerful social network in which some, depending on what they knew, were perhaps able to look away because they had learned to look away from so much other abuse and suffering: the financial meltdowns some in the network helped trigger, the misbegotten wars some in the network pushed, the overdose crisis some of them enabled, the monopolies they defended, the inequality they turbocharged, the housing crisis they milked, the technologies they failed to protect people against.

The Epstein story is resonating with a broader swath of the public than most stories now do, and some in the establishment worry. When Representative Ro Khanna, Democrat of California, speaks of an “Epstein class,” isn’t that dangerous? Isn’t that class warfare?

But the intuitions of the public are right. People are right to sense that, as the emails lay bare, there is a highly private merito-aristocracy at the intersection of government and business, lobbying, philanthropy, start-ups, academia, science, high finance and media that all too often takes care of its own more than the common good. They are right to resent that there are infinite second chances for members of this group even as so many Americans are deprived of first chances. They are right that their pleas often go unheard, whether they are being evicted, gouged, foreclosed on, A.I.-obsolesced — or, yes, raped.

It is no accident that this was the social milieu that took Mr. Epstein in. His reinvention, after he pleaded guilty to prostitution-related charges in Florida in 2008 , would never have been possible without this often anti-democratic, self-congratulatory elite, which, even when it didn’t traffic people, took the world for a ride.

The emails, in my view, together sketch a devastating epistolary portrait of how our social order functions, and for whom. Saying that isn’t extreme. The way this elite operates is.

The idea of an Epstein class is helpful because one can be misled by the range of people with whom Mr. Epstein ingratiated himself. Republicans. Democrats. Businesspeople. Diplomats. Philanthropists. Healers. Professors. Royals. Superlawyers. A person he emailed at one moment was often at war with the ideas of another correspondent — a Lawrence Summers to a Steve Bannon, a Deepak Chopra to a scientist skeptical of all spirituality, a Peter Thiel to a Noam Chomsky. This diversity masked a deeper solidarity.

What his correspondents tended to share was membership in a distinctly modern elite: a ruling class in which 40,000-foot nomadism, world citizenship and having just landed back from Dubai lend the glow that deep roots once provided; in which academic intellect is prized the way pedigree once was; in which ancient caste boundaries have melted to allow rotation among, or simultaneous pursuit of, governing, profiting, thinking and giving back. Some members, like Mr. Summers , are embedded in all aspects of it; others, less so.

If this neoliberal-era power elite remains poorly understood, it may be because it is not just a financial elite or an educated elite, a noblesse-oblige elite, a political elite or a narrative-making elite; it straddles all of these, lucratively and persuaded of its own good intentions. If it’s a jet set, it’s a carbon-offset-private-jet set. After all, flying commercial won’t get you from your Davos breakfast on empowering African girls with credit cards to your crypto-for-good dinner in Aspen.

Many of the Epstein emails begin with a seemingly banal rite that, the more I read, took on greater meaning: the whereabouts update and inquiry. In the Epstein class, emails often begin and end with pings of echolocation. “Just got to New York — love to meet, brainstorm,” the banker Robert Kuhn wrote to Mr. Epstein. “i’m in wed, fri. edelman?” Mr. Epstein wrote to the billionaire Thomas Pritzker (it is unclear if he meant a person, corporation or convening). To Lawrence Krauss, a physicist in Arizona: “noam is going to tucson on the 7th. will you be around.” Mr. Chopra wrote to say he would be in New York, first speaking, then going “for silence.” Gino Yu, a game developer, announced travel plans involving Tulum, Davos and the D.L.D. (Digital Life Design) conference — an Epstein-class hat trick.

Landings and takeoffs, comings and goings, speaking engagements and silent retreats — members of this group relentlessly track one another’s passages through JFK, LHR, NRT and airports you’ve never even heard of. Whereabouts are the pheromones of this elite. They occasion the connection-making and information barter that are its lifeblood. If “Have you eaten?” was a traditional Chinese greeting, “Where are you today?” is the Epstein-class query.

Their loyalty, it appears, is less downward to people and communities than horizontal to fellow members of their borderless network. Back in 2016, Theresa May, then the prime minister of Britain, seemed to capture their essence: “If you believe you are a citizen of the world, you are a citizen of nowhere.” Mr. Epstein’s correspondents come alive far from home, freed from obligations, in the air, ready to connect.

And the payoff can be real. Maintain, as Mr. Epstein did, a grandmother-like radar of what a thousand people are doing tomorrow and where, and you can introduce a correspondent needing a lending partner to someone you’re seeing today. Or let Ehud Barak know a Rothschild has the flu. Or offer someone else a jet ride back to New York and reward the journalist who tipped you off by setting him up to meet a Saudi royal.

But the whereabouts missive is just the first flush of connection. Motion is the flirtation; actual information, the consummation.

How did Mr. Epstein manage to pull so many strangers close? The emails reveal a barter economy of nonpublic information that was a big draw. This is not a world where you bring a bottle of wine to dinner and that’s it. You bring what financiers call “edge” — proprietary insight, inside information, a unique takeaway from a conference, a counterintuitive prediction about A.I., a snippet of conversation with a lawmaker, a foretaste of tomorrow’s news.

What the Epstein class understands is that the more accessible information becomes, the more precious nonpublic information is. The more everybody insta-broadcasts opinions, the dearer is the closely held take. The emails are a private, bilateral social media for people who can’t or won’t post: an archipelago of single-subscriber Substacks. And in the need to maintain relevance by offering edge, a reader detects thirst and swagger, desperateness and swanning.

“Saw Matt C with DJT at golf tournament I know why he was there,” Nicholas Ribis, a former Trump Hotel executive, wrote to Mr. Epstein, making what couples therapists call a bid for attention. Jes Staley, then a top banking executive, casually mentioned a dinner with George Tenet, the former Central Intelligence Agency director, and got the reaction he probably hoped for: “how was tenet.” Mr. Summers laid bait by mentioning meetings with people at SoftBank and Saudi Arabia’s sovereign wealth fund. Mr. Epstein nibbled: “anyone stand out?” Then Mr. Summers could offer proprietary intel. On it went: What are people saying? Who are you hearing for F.B.I. director? Should I drop your name to Bill Clinton?

Sometimes these people give the impression that their minds would be blown by a newspaper. Mr. Kuhn wrote to Mr. Epstein: “Love to get your sense of Trump’s administration, policies.” And while it may seem strange to rely on Mr. Epstein for political analysis when you can visit any number of websites, for this class, insight’s value varies inversely with the number of recipients. And the ultimate flex is getting insider intel and shrugging: “Nthg revolutionary really,” the French banker Ariane de Rothschild wrote during a meeting with Portugal’s prime minister.

Nomadic bat signals get things going, and edge keeps them flowing, while underneath a deeper exchange is at work. The smart need money; the rich want to seem smart; the staid seek adjacency to what Mr. Summers called “life among the lucrative and louche”; and Mr. Epstein needed to wash his name using blue-chip people who could be forgiving about infractions against the less powerful. Each has some form of capital and seeks to trade. The business is laundering capital — money into prestige, prestige into fun, fun into intel, intel into money.

Mr. Summers wrote to Mr. Epstein: “U r wall st tough guy w intellectual curiosity.” Mr. Epstein replied: “And you an interllectual with a Wall Street curiosity.”

In another email, Mr. Epstein offered typo-strewn and false musings on climate science to Mr. Krauss, including that Canada perhaps favored global warming, since it’s cold (it doesn’t), and that the South Pole is actually getting colder (it’s melting rapidly). Mr. Krauss let Mr. Epstein indulge in his rich-man theorizing while offering a tactful correction and a hint that more research funding would help.

For this modern elite, seeming smart is what inheriting land used to be: a guarantor of opened doors. A shared hyperlink can’t stand alone; your unique spin must be applied. Mr. Krauss sends his New Yorker article on militant atheism; Mr. Chomsky sends a multiparagraph reply; Mr. Epstein dashes off: “I think religion plays a major positive role in many lives. . i dont like fanaticism on either side. . sorry.” This somehow leads to a suggestion that Mr. Krauss bring the actor Johnny Depp to Mr. Epstein’s private island.

Again and again, scholarly types lower themselves to offer previews of their research or inquiries into Mr. Epstein’s “ideas.” “Maybe climate change is a good way of dealing with overpopulation,” muses Joscha Bach, a German cognitive scientist.

The nature of this omnidirectional capital exchange comes into special focus in the triangle of emails among Mr. Epstein, Mr. Summers and his wife, Elisa New. Mr. Summers seemingly benefited from Mr. Epstein’s hosting, tip-offs, semi-insight into Trumpworld and, most grossly, dating advice many years into his marriage.

Ms. New sought Mr. Epstein’s help contacting Woody Allen and revising her emails to invite people on her televised poetry show. Mr. Epstein tutored her in elite mores and motives: Don’t say, Come on my show ; say, Join Serena Williams, Bill Clinton and Shaq in coming on my show . Mr. Epstein reaped the benefits of smarts by association in hanging around them, of the reputation cleanse of affiliation with Harvard professors and a former Treasury secretary, and of getting to cosplay as statesman, once sending an unsolicited intro email to Mr. Summers and a Senegalese politician, Karim Wade, who, Mr. Epstein informed Mr. Summers, is “the most charismatic and rational of all the africans and has there respect.” There are 1.5 billion people and 54 countries in Africa.

This class has its status games. One is, when getting a tip, to block the blessing by saying you already know. Another is to apologize for busyness by invoking centrality — “trump related issues occupying my time.” When an intro is offered, the coldest reply is “no.” The ultimate power move is from Mohamed Waheed Hassan of the Maldives, whose emails ended: “Sent from President’s iPad.”

If you were an alien landing on Earth and the first thing you saw was the Epstein emails, you could gauge status by spelling, grammar, punctuation. Usage is inversely related to power in this network. The earnest scientists and scholars type neatly. The wealthy and powerful reply tersely, with misspellings, erratic spacing, stray commas.

The status games belie a truth, though: These people are on the same team. On air, they might clash. They promote opposite policies. Some in the network profess anguish over what others in the network are doing. But the emails depict a group whose highest commitment is to their own permanence in the class that decides things. When principles conflict with staying in the network, the network wins.

Mr. Epstein may despise what Mr. Trump is doing, but he still hangs with Steve Bannon, the Trump whisperer and attack dog, seeking help on crypto regulation. Michael Wolff is a journalist, but that doesn’t stop him from advising Mr. Epstein on his public image. Kenneth Starr, who once doggedly pursued sexual misconduct allegations against Mr. Clinton, reinvented himself as a defender of Mr. Epstein. These are permanent survivors who will profit when things are going this way and then profit again when they turn.

“What team are you pulling for?” Linda Stone, a retired Microsoft executive, asked Mr. Epstein just before the 2016 election.

“none,” he replied.

In one email, he commiserates with Mr. Wolff about Mr. Bannon’s rhetoric; in another, he invites Mr. Bannon over and suggests an additional guest — Kathryn Ruemmler, who served as President Barack Obama’s White House counsel.

His exchanges with Ms. Ruemmler are especially striking — not for the level of horridness, but for how they portray this network at its most shape-shiftingly self-preservational, and most indifferent to the human beings below.

Like so many, she had gone from Obama-era public service to private legal practice, eventually becoming the chief lawyer for Goldman Sachs. That people move from representing the presidency to representing banks is so normal that we forget the costs: the private job done with the savvy to outfox one’s former public-sector colleagues, the public job done gently to keep open doors.

In some exchanges in 2014, Ms. Ruemmler appears to be contemplating a job offer: attorney general of the United States, according to contemporary reports. And who does she seek advice from? A convicted sex offender.

In another email, Mr. Epstein asks a legal question about whether Mr. Trump can declare a national emergency to build a border wall. She responds that a prospective employer has offered her a $2 million signing bonus. The glide from tyranny to bonus distills a core truth: Regardless of what happens, the members of this social network will be fine.

Ms. Ruemmler told Mr. Epstein she was going to New York one day. “I will then stop to pee and get gas at a rest stop on the New Jersey Turnpike, will observe all of the people there who are at least 100 pounds overweight, will have a mild panic attack as a result of the observation, and will then decide that I am not eating another bite of food for the rest of my life out of fear that I will end up like one of these people,” she wrote in 2015.

But in the class of permanent survivors, today’s jump scare may yield to tomorrow’s opportunity. A few years after she joined the company, Goldman Sachs declared anti-obesity drugs a “$100 billion opportunity.”

Generally, you can’t read other people’s emails. Powerful people have private servers, I.T. staffs, lawyers. When you get a rare glimpse into how they actually think and view the world, what they actually are after, heed Maya Angelou: Believe them.

American democracy today is in a dangerous place. The Epstein emails are a kind of prequel to the present. This is what these powerful people, in this mesh of institutions and communities, were thinking and doing — taking care of one another instead of the general welfare — before it got really bad.

This era has seen a surge in belief in conspiracy theories, including about Mr. Epstein, because of an underlying intuition people have that is, in fact, correct: The country often seems to be run not for the benefit of most of us.

Shaming the public as rubes for succumbing to conspiracy theories misses what people are trying to tell us: They no longer feel included in the work of choosing their future. On matters small and big, from the price of eggs to whether the sexual abuse of children matters, what they sense is a sneering indifference. And a knack for looking away.

Now the people who capitalized on the revolt against an indifferent American elite are in power, and, shock of all shocks, they are even more indifferent than anyone who came before them. The clubby deal-making and moral racketeering of the Epstein class is now the United States’ governing philosophy.

In spite of that, the unfathomably brave survivors who have come forward to testify to their abuse have landed the first real punch against Mr. Trump. In their solidarity, their devotion to the truth and their insistence on a country that listens when people on the wrong end of power cry for help, they shame the great indifference from above. They point us to other ways of relating.

[ Anand Giridharadas is the author of “Winners Take All: The Elite Charade of Changing the World” and the publisher of the newsletter The.Ink. ]

Why Walmart Wants To See the Starbucks Barista Strike Fail

Portside
portside.org
2025-12-12 03:57:18
Why Walmart Wants To See the Starbucks Barista Strike Fail jay Thu, 12/11/2025 - 22:57 ...
Original Article

Thousands of Starbucks workers across a hundred cities are nearly one month into an expanding, nationwide unfair labor practice strike in protest of the coffee giant’s “historic union busting and failure to finalize a fair union contract,” according to Starbucks Workers United, the barista union that has spread to over 650 stores since its birth in Buffalo four years ago .

The strike comes after years of illegal anti-union antics by Starbucks and follows a historic $39 million settlement announced on December 1 for more than 500,000 labor violations committed by Starbucks management in New York City since 2021.

The rise of Starbucks Workers United has energized the U.S. labor movement, as the struggle to unionize the mega-chain represents far more than baristas pitted against managers: Starbucks is a trend-setting global powerhouse and one of the top U.S. employers . Current fights at places like Starbucks and Amazon will shape the labor movement for decades to come.

This is well understood by industry leaders, in no small part because of Starbucks’s deep interlocks with major corporations across numerous sectors. At its highest levels of governance and management, Starbucks’s closest industry ally may be Walmart, the top U.S. corporate employer and a long-time anti-union stalwart. Starbucks and Walmart, along with other corporations represented on Starbucks’s board of directors, also support major industry groups that carry out the retail and service sectors’ wider agenda of weakening unions.

Moreover, while Starbucks positions itself as a leader on climate and sustainability, it recently brought a longtime board director of oil giant Chevron onto its board, a move that lends legitimacy to accusations of hypocrisy leveled by baristas against the company.

All told, striking baristas are not merely up against the executives of a coffee store behemoth, but a broader constellation of corporate power fully networked into Starbucks’s top leadership.

The New Starbucks Regime

In September 2024, Starbucks hired Brian Niccol as its new CEO — its fourth since 2022. Starbucks sales were stagnant , and Niccol, who had been Chipotle’s CEO since 2018, had a reputation as a successful food service executive. Starbucks’s stock shot up a record 24 percent with the news of Niccol’s hiring.

Under his watch at Chipotle, the company paid $240,000 to workers who sought to unionize a shop in Augusta, Maine, that the company shuttered , and Chipotle was accused of withholding raises from unionizing workers in Lansing, Michigan.

Nearly 500 Starbucks stores had unionized by the time Niccol took over. The new Starbucks CEO emphasized boosting sales at stores and promised “high-quality handcrafted beverage to our cafe customers in four minutes or less” — experienced by baristas as speed-ups and surveillance.

Niccol’s total compensation package last year as Starbucks CEO was an astounding $95.8 million . The AFL-CIO ranked Niccol as the fifth-highest paid CEO of 2024, and Starbucks’s 2024 CEO-to-worker pay ratio was an astronomical 6,666-to-1 .

Niccol has also garnered controversy — and the ire of baristas — for accepting a company-paid remote office in Newport Beach, California, and commuting 1,000 miles on Starbucks’s corporate jet to its Seattle headquarters.

The highest governing body over Starbucks — which hired Niccol and can fire him — is the company’s board of directors. Mirroring the CEO turnover, the majority of Starbucks’s board is today composed of new faces compared to just a few years ago .

For the prior 20 years , the Starbucks board had been anchored by Mellody Hobson, who also sits on the board of JPMorgan Chase, the U.S.’s top bank, and is married to billionaire filmmaker George Lucas.

Today, the major corporate ties represented on Starbucks’s board through current or recent past executive or director positions cut across industries, from telecoms (T-Mobile and AT&T) to tech (YouTube and Yahoo), agriculture (Land O’ Lakes) to apparel (Nike), hotels (Hilton) to finance (BlackRock), and much more. The board also reflects Starbucks’s global scope, with representatives from prominent companies in China (Alibaba), Latin America (Grupo Bimbo), and Europe (LEGO).

The Starbucks-Walmart Nexus

Starbucks’s leadership has a close alliance with another anti-union retail powerhouse: Walmart.

Most notably, Starbucks CEO Brian Niccol is simultaneously a board director of Walmart. Niccol joined Walmart’s board in June 2024, replacing Rob Walton, the son of Walmart founder Sam Walton who had served on the company’s board for over three decades, mostly as its chairman.

But that’s not the only Walmart connection: Starbucks board director Marissa A. Mayer, who became a Starbucks board director in June 2025 , has sat on the retail giant’s board since 2012 . Niccol was compensated with $274,973 by Walmart in 2025, and Mayer made $299,973. Mayer currently owns 129,642 shares of Starbucks stock, worth around $11 million.

As Walmart directors, Niccol and Mayer are swimming among the heights of billionaire power. The Walton family — who effectively owns Walmart with a 45 percent company stake — is worth $267 billion , and two Walton family members sit on Walmart’s board, including its chairman Greg Penner , who is married to Carrie Walton Penner, the daughter of Rob Walton.

Additionally, Mellody Hobson — again, who left the Starbucks board just a few months ago after a 20-year stint — is also part of the Walton-Penner Family Ownership Group that purchased the National Football League’s Denver Broncos in 2022.

Like Starbucks, Walmart is notorious for its union busting and ability to hold down the wage floor, though its wages have risen in recent years as it was “in the crosshairs of labor activists” and trying to reduce employee turnover, according to the Wall Street Journal .

Just recently, in 2024, the National Labor Relations Board alleged that the retail giant interrogated and threatened pro-union workers at a store in Eureka, California.

As the biggest employers in their respective industries, corporations like Walmart and Starbucks, as well as other top non-union employers like Amazon and Home Depot, understand unions as existential threats, and they’ve historically aimed to crush emerging beachheads through illegal firings, store closures, and endless bargaining delays.

Industry Groups Against Unionization

Starbucks and Walmart’s united front against workers is also reflected in their joint dedication to lobbying and policy groups that carry out the industry’s wider anti-union agenda.

A compelling example of this is the Retail Industry Leaders Association (RILA), one of the leading industry groups for major corporate retailers. While companies carry out their own individual lobbying efforts, they pool their resources into groups like RILA to advance their general interests as an industry.

RILA is dedicated to weakening labor unions and supporting anti-labor campaigns. It spends millions on federal lobbying annually to defend corporate interests around taxation and regulation and to fight pro-labor measures like the Protecting the Right to Organize (PRO) Act.

RILA’s 2025 policy agenda advocates “redesign[ing] and purs[uing] workforce policies and practices to reimagine outdated labor laws.” In 2024, it warned of workers at Amazon and Starbucks winning their first contracts, which “are the holy grail because unions, once embedded, rarely relinquish their hold.”

Both Walmart and Starbucks are RILA members, and Starbucks’s current and historic ties to RILA run deep. Former Starbucks board director Mary Dillon is the former chair of RILA. Additional companies represented on Starbucks’s board, like Nike and Williams-Sonoma, are also RILA members.

Starbucks, Walmart, and other corporations represented on Starbucks’s board are also tied to other major anti-union industry groups, such as the National Retail Federation (NRF) and National Restaurant Association (NRA).

While the membership rolls of these groups are not disclosed, corporations like Walmart and Starbucks feature prominently in their leadership and activity. For example, Walmart sits on the anti-union NRF’s board , and the group has supported litigation aimed at combating Starbucks Workers United.

Starbucks CEO Brian Niccol is also a board director of Walmart, among several close connections between the two huge anti-union corporations and their industry groups.

Climate Hypocrisy

Unionizing baristas have long criticized Starbucks for describing its employees as “partners” and adopting a “progressive” veneer while overseeing a fierce anti-union campaign. But the company’s hypocrisy arguably stretches to another area where it claims moral high ground: climate and sustainability.

In June 2025, Starbucks brought in Dambisa Moyo, a longtime board director of Chevron, the second-largest U.S. oil company, as a board member. Moyo has served on Chevron’s board since 2016. In 2024 alone, she took in $457,604 in compensation from Chevron for her board role. According to her most recent disclosure, she owns more than $2.1 million in Chevron stock.

In a 2020 interview , Moyo said it was “very shortsighted” and “naive” for “people to be campaigning for defunding” fossil fuel companies like Chevron that she said “can potentially find solutions to the climate change crisis.”

Since then, Chevron and other Big Oil majors have doubled down on fossil fuel extraction and slashed their low-carbon investments, while their climate pledges have garnered criticism. Chevron ranks as the 21st-largest U.S. greenhouse gas polluter, according to the UMass Political Economy Research Institute’s most recent “Polluters Index.” A 2019 investigation found that Chevron was the world’s second-biggest emitter of carbon dioxide equivalent since 1965.

Moyo has also held board roles at corporations like 3M, which has paid out hundreds of millions of dollars in settlements tied to its production of cancer-causing PFAS “ forever chemicals ,” and Barrick Gold Corporation, which engages in gold and copper extraction and has faced accusations of human rights violations .

While Starbucks has won industry praise for its sustainability gestures, the decision to bring on Moyo, a clear defender of fossil fuel companies who has millions personally invested in Big Oil stock, raises alarm about the coffee giant’s climate commitments.

Common Foes

Other ongoing labor struggles share common opponents with Starbucks Workers United.

For example, labor unions and community groups in Los Angeles are organizing against displacement and heightened policing, and for living wages, housing protections, and immigrant rights. Their organizing efforts are framed around the 2028 Summer Olympics, which will be held in LA.

Some of the same corporate actors driving Starbucks are overseeing the LA 2028 games. Longtime Starbucks director and former top Nike executive Andy Campion is a board director of the committee organizing LA’s hosting of the 2028 Olympics, while former Starbucks director Hobson is also an LA2028 board member. Starbucks is a “Founding Partner” of the LA2028 games.

Starbucks also historically has strong interlocks with Big Tech, and some Starbucks directors — such as Neal Mohan, the CEO of YouTube, which is owned by Google and its parent company Alphabet — are powerful figures in Silicon Valley. Recent former Starbucks directors also include Microsoft CEO Satya Nadella and Clara Shih , head of Meta’s business AI division.

In recent years, tech workers have been facing off against some of these Starbucks-linked tech CEOs by organizing through unions like Alphabet Workers Union and campaigns like No Tech for Apartheid .

All told, while the ongoing barista strike is part of the larger struggle to unionize Starbucks, it also represents something much broader: a pitched battle against an executive and governance regime interlocked with a wider network of corporate power whose tentacles stretch far beyond a chain of coffee shops.

[ Derek Seidman is a writer, researcher, and historian living in Buffalo, New York. He is a regular contributor for Truthout and a contributing writer for LittleSis. ]

This article is licensed under Creative Commons (CC BY-NC-ND 4.0) , and you are free to share and republish under the terms of the license.

Are we stuck with the same Desktop UX forever?

Lobsters
www.youtube.com
2025-12-12 03:55:02
Comments...

Being a SysAdmin is hard

Lobsters
about.tree.ht
2025-12-12 03:35:25
Comments...
Original Article

Ugh.

A monitoring dashboard showing an uptime of 74.71%

Ok, so Treehut has had a lot of downtime recently. I am frustrated about that. The underlying causes are not easily addressable. I want Treehut to be stable, reliable, and trustworthy, as I and a few others use it to store a lot of code. These two things are in conflict. It feels bad.

What happened?

Well, there were two separate outages, and before I can really explain them it's worth providing context about how Treehut currently operates: on consumer-grade PC hardware in my closet. It's perfectly decent consumer hardware, and I'm pretty sure that the hardware choices did not in any way cause the downtime; the "in my closet" bit is far more relevant here.

I have bog-standard home internet. It's 2025 and I'm lucky enough to live in an area with decent internet options, so bog-standard is 1Gbps symmetric fiber, but it does not include some niceties that would simplify my life, like a static IP address. The lack of a static IP address means that I cannot host Treehut entirely on my own hardware or network, and that I am reliant on someone else providing the static IP address and funneling traffic to me. Rather than rely on Cloudflare Tunnels or DynDNS or some other thing that adds a bit too much magic to the setup for my taste, I run Caddy on a cheap virtual machine using a cloud provider that is not the side hustle of an evil mega corporation. Caddy acts as a reverse proxy to Pecha, the server which hosts Treehut and its storage pool. It is able to route traffic to Pecha despite Pecha not being on the public internet because of Tailscale.

Tailscale is a neat little tool that has caused me tremendous pain and has given me too many headaches to count. It is a terrible single point of failure that is very hard to remove from the setup without some drastic changes to architecture. I don't want to recount the scathing details of my past horrors with it, but suffice to say that it pains me to have Tailscale as a critical part of Treehut's infrastructure.

The specific issue I had with it this time is so horribly mundane and frustrating that I don't really know what to say about it or how to mitigate it going forward: the Tailscale container on Pecha crashed. The "fix" was to log in to a dashboard and click "Start" on the container to bring it back up. This caused nearly a full week of downtime.

Wait, what? Why?

Because as I mentioned, Tailscale is a terrible single point of failure. Also I was in Canada.

As people do sometimes, I took a vacation. I went to Banff with my wife and some dear friends who live very far away and I don't get to see very often. We all had a lovely time, thank you for asking. :)

You may recall tho, I said that Treehut's internal network does not have a static IP address and cannot be routed to from the public internet; you need Tailscale. Tailscale, which died.

Technically, the Tailscale container died while I was at home packing. I got an email notifying me of the outage, because I do have some monitoring set up, but I did not see it until I was in Canada because I was more focused on traveling internationally for the first time than checking my email. 😅 If I wasn't preparing for vacation this outage would've probably lasted an hour at most. But I was, and it was a nice long trip, so now I'm here screaming at the Internet about it.

Anyway, I got home from the trip, took some time to unpack and decompress, and then got to troubleshooting. I clicked the big green "fix everything" button and lived happily ever after. Well, until...

What happened? (Part 2)

Like I said, there were two outages. This time the entire server hung. It stopped responding to any and all network traffic. I was actually home this time, saw the email, and I've got to admit, I had a long day at work and wanted to play Pokémon Legends Z-A Nintendo Switch 2 Edition Mega Dimension so that I could collect some little guys and unwind. This caused 23 hours of downtime.

I am an admin team of one at my little venture here. I don't necessarily want to be, but I can't afford to pay myself a salary for this, let alone anyone else, and also it's hard to give anyone else the necessary level of access to the servers for resolving issues like this (re: them being in my closet).

Anyway, when I finally had the time and energy to look into the situation I connected to the JetKVM which I use to manage the server and tried to see if I could get a shell on the machine since I tried and failed to get a remote one. I could not. Key strokes weren't registering, ping was timing out, and so I didn't have a lot of options. I connected to the server using the Sneakernet and forcibly restarted the server. Once it came back to life I restarted the services and everything is now operating normally. I ran a data integrity check and got a clean bill of health.

Will this happen again?

Idk! Probably! I hope not! In particular, the second outage that required physically interrupting power has me worried. I cannot find any explanation as to what caused the server to hang and that's a terrifying feeling.

To be honest, I don't really know what to do right now. This moment really has me feeling like I should've just gone all in on a cloud deployment instead. But I liked the idea of being able to host everything but the reverse proxy at home on solar power, and also of having a bunch of little computers in my closet doing my bidding.

I'm frustrated because I know I'm doing a lot of things right. I have data replication, and I have firewalls and intrusion detection, and I have a polished deployment procedure for updating Treehut with minimal downtime despite not having the appropriate infrastructure for true "high availability" yet, and I'm working on having true high availability, even! Yet my monitoring setup leaves a lot to be desired, and I don't have enough glass to break in case of emergencies.

I have a lot of reasons for running Treehut, but one of the goals is just to see what it's like to try and run a service yourself while aiming for several nines of uptime. It turns out that it's, like, really fucking hard. I have spent a ton of time and money on Treehut, so it sucks to see how far I obviously still have to go. It sucks, but it's part of the learning process I suppose. I have some ideas about how I can improve things, but it seems like I've lost all of my "nine privileges" for a while.

Back to blog

A Lisp Interpreter Implemented in Conway’s Game of Life

Lobsters
woodrush.github.io
2025-12-12 03:11:31
Comments...
Original Article

A screenshot of the Lisp in Life architecture.

Lisp in Life is a Lisp interpreter implemented in Conway’s Game of Life.

The entire pattern is viewable on the browser here .

To the best of my knowledge, this is the first time a high-level programming language was interpreted in Conway’s Game of Life.

Running Lisp on the Game of Life

Lisp is a language with a simple and elegant design and an extensive ability to express sophisticated ideas as simple programs. Notably, its powerful macro facility can be used to modify the language’s syntax and write programs in a highly flexible way. For example, macros can be used to introduce new programming paradigms to the language, as demonstrated in object-oriented-like.lisp (which can actually be evaluated by the interpreter, although complex programs take quite a long time to finish running), where a structure and syntax similar to classes in Object-Oriented Programming is constructed. Despite its expressibility, Lisp is the world’s second-oldest high-level programming language, introduced in 1958 and preceded only by Fortran.
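
As a small, hedged illustration of the kind of syntax extension that macros allow (this is not the contents of object-oriented-like.lisp, and the primitive names used here, such as defmacro, if, and quasiquote, are assumptions about the dialect rather than guaranteed features of this interpreter):

; Define a new conditional form by rewriting it into an existing one.
; The empty list () is assumed to act as the false value.
(defmacro my-unless (condition expr)
  `(if ,condition () ,expr))

(print (my-unless () 42))   ; expands to (if () () 42) and prints 42

object-oriented-like.lisp applies the same idea on a much larger scale, using macros to build a class-like structure and syntax out of the interpreter’s base forms.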

Conway’s Game of Life is a cellular automaton proposed in 1970. Despite having a very simple set of rules, it is known to be Turing-complete. Lisp in Life demonstrates this fact in a rather straightforward way.

How can simple systems allow human thought to be articulated and expanded? With the expressibility of Lisp running on the simple basis of Conway’s Game of Life, Lisp in Life provides an answer to this question.

Input and Output

The Lisp program is provided by editing certain cells within the pattern to represent the ASCII encoding of the program. The pattern reads this text directly, evaluates it, and writes out the results. You can also load your own Lisp program into the pattern and run it. The standard output is written at the bottom end of the RAM module, which can be easily located and directly examined in a Game of Life viewer. The Lisp implementation supports lexical closures and macros, allowing one to write Lisp programs in a Lisp-like taste, as far as the memory limit allows.
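
For a concrete sense of the lexical closures mentioned above, a program in the spirit of the bundled examples might look like the following sketch (it is not one of the repository’s sample programs; define, lambda, and print appear in the example shown later on this page, while + is assumed to behave like the * used there):

; make-adder returns a closure that captures n lexically.
(define make-adder (lambda (n)
  (lambda (m) (+ m n))))

(define add3 (make-adder 3))

(print (add3 39))   ; the inner lambda still sees n = 3, so this prints 42

The inner lambda keeps its reference to n even after make-adder has returned, which is exactly the closure behavior the interpreter supports.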

The Lisp interpreter is written in C. Using the build system for this project, you can also compile your own C11-compatible C code and run it on Conway’s Game of Life.

Previous Work

As previously mentioned, to the best of my knowledge, this is the first time a high-level programming language was interpreted in Conway’s Game of Life.

The entry featuring Universal Computers in LifeWiki has a list of computers created in the Game of Life. Two important instances not mentioned in this entry are the Quest For Tetris (QFT) Project, created by the authors of the QFT project, and APGsembly, created by Adam P. Goucher. All of these works are designed to run an assembly language and are not designed to interpret a high-level language per se.

An example of a compiled high-level language targeting the Game of Life is Cogol, by the QFT project. Cogol is compiled to the assembly language QFTASM, which targets the QFT architecture; although Cogol targets that architecture, its code must first be compiled to QFTASM before it can run there.

In Lisp in Life, a modified version of the QFT architecture is first created to improve the pattern’s runtime. Modifications include introducing a new cascaded storage architecture for the ROM, adding new opcodes, extending the ROM and RAM address space, etc. The Lisp source code is then written into the computer’s RAM module in its raw binary ASCII format. The Conway’s Game of Life pattern directly reads, parses, and evaluates this Lisp source code to produce its output. Allowing a Conway’s Game of Life pattern to evaluate a high-level programming language expressed as a string of text is a novel feature first achieved in this project.

Video

Here is a YouTube video showing Lisp in Life in action:

YouTube video of Lisp in Life.

Screenshots

An overview of the entire architecture.

An overview of the entire architecture.

An overview of the CPU and its surrounding units.

An overview of the CPU and its surrounding modules. On the top are the ROM modules, with the lookup module on the right, and the value modules on the left. On the bottom left is the CPU. On the bottom right is the RAM module.

This pattern is the VarLife version of the architecture. VarLife is an 8-state cellular automaton defined in the Quest For Tetris (QFT) Project, which is used as an intermediate layer to create the final Conway’s Game of Life pattern. The colors of the cells indicate the 8 distinct states of the VarLife rule.

The architecture is based on Tetris8.mc in the original QFT repository . Various modifications were made to make the pattern compact, such as introducing a new lookup table architecture for the ROM, removing and adding new opcodes, expanding the ROM and RAM address space, etc.

The Conway's Game of Life version of the architecture, converted from the VarLife pattern.

The Conway’s Game of Life version of the architecture, converted from the VarLife pattern. What appears to be a single cell in this image is actually an OTCA metapixel zoomed away to be shown 2048 times smaller.

A close-up view of a part of the ROM module in the Conway's Game of Life version.

A close-up view of a part of the ROM module in the Conway’s Game of Life version. Each pixel in the previous image is actually this square-shaped structure shown in this image. These structures are OTCA metapixels , which can be seen to be in the On and Off meta-states in this image. The OTCA Metapixel is a special Conway’s Game of Life pattern that can emulate cellular automatons with customized rules. The original VarLife pattern is simulated this way so that it can run in Conway’s Game of Life.

The OTCA Metapixel simulating Life in Life can be seen in this wonderful video by Phillip Bradbury: https://www.youtube.com/watch?v=xP5-iIeKXE8

A video of the RAM module of the computer in the VarLife rule in action.

A video of the RAM module in the VarLife rule in action.

The computer showing the results of the computation of `(print (* 3 14))`.

The computer showing the results of the following Lisp program:

(define mult (lambda (m n)
  (* m n)))

(print (mult 3 14))

The result is 42, shown in binary ASCII format (0b110100 and 0b110010, the ASCII codes of the characters '4' and '2'), read from bottom to top.

As shown in this image, the standard output of the Lisp program gets written at the bottom end of the RAM module, and can be directly viewed in a Game of Life viewer. This repository also contains scripts that run on Golly to decode and view the contents of the output as strings.
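The decoding itself is simple. Below is a minimal Python sketch of the idea (not the repository’s actual Golly script), assuming each output word holds one ASCII code and that the words are listed bottom-to-top:

def decode_output(words_bottom_to_top):
    # Each RAM word of the standard output holds one ASCII code;
    # zero words are empty slots and are skipped.
    return "".join(chr(w) for w in words_bottom_to_top if w != 0)

print(decode_output([0b110100, 0b110010]))   # -> '42', as in the screenshot above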

How is it Done?

The build flow of Lisp in Life.

The Lisp interpreter, written in C, is compiled to an assembly language for a CPU architecture implemented in the Game of Life, which is a modification of the computer used in the Quest For Tetris (QFT) project. The compilation is done using an extended version of ELVM (the Esoteric Language Virtual Machine). I implemented the Game of Life backend for ELVM myself.

Generating a small enough pattern that runs in a reasonable amount of time required a lot of effort. This required optimizations and improvements in every layer of the project; a brief summary would be:

  • The C Compiler layer - adding the computed goto feature to the C compiler, preserving variable symbols to be used after compilation, etc.
  • The C layer (the Lisp interpreter ) - using a string hashtable and binary search for Lisp symbol lookup, minimization of stack region usage with union memory structures, careful memory region map design, etc.
  • The QFTASM layer - writing a compiler optimizer to optimize the length of the assembly code
  • The VarLife layer (the CPU architecture) - creating a lookup table architecture for faster ROM access, expanding the size and length of the RAM module, adding new opcodes, etc.
  • The Game of Life layer - Hashlife -specific optimization

A more detailed description of the optimizations done in this project is available in the Implementation Details section.

Conversion from VarLife to Conway’s Game of Life

VarLife is an 8-state cellular automaton defined in the Quest For Tetris (QFT) Project. It is used as an intermediate layer to generate the final Conway’s Game of Life pattern; the computer is first created in VarLife, and then converted to a Game of Life pattern.

When converting VarLife to Conway’s Game of Life, each VarLife cell is mapped to an OTCA Metapixel (OTCAMP). The conversion from VarLife to the Game of Life is done in a way so that the behavior of the states of the VarLife pattern matches exactly with the meta-states of the OTCA Metapixels in the converted Game of Life pattern. Therefore, it is enough to verify the behavior of the VarLife pattern to verify the behavior of the Game of Life pattern.

Due to the use of OTCA Metapixels, each VarLife cell becomes extended to a 2048x2048 Game of Life cell, and 1 VarLife generation requires 35328 Game of Life generations. Therefore, the VarLife patterns run significantly faster than the Game of Life (GoL) version.
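These conversion factors can be checked directly against the statistics given later. A tiny Python check for print.lisp:

varlife_generations = 105_413_068             # exact halting generation count of print.lisp (VarLife)
gol_generations = varlife_generations * 35328  # 35328 GoL generations per VarLife generation
print(gol_generations)                         # -> 3724032866304, the GoL figure listed in the tables below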

Additional details on VarLife are available in the Miscellaneous section.

Pattern Files

Program VarLife Pattern Conway’s Game of Life Pattern
print.lisp QFT_print.mc QFT_print_metafied.mc
lambda.lisp QFT_lambda.mc QFT_lambda_metafied.mc
printquote.lisp QFT_printquote.mc QFT_printquote_metafied.mc
factorial.lisp QFT_factorial.mc QFT_factorial_metafied.mc
z-combinator.lisp QFT_z-combinator.mc QFT_z-combinator_metafied.mc
backquote-splice.lisp QFT_backquote-splice.mc QFT_backquote-splice_metafied.mc
backquote.lisp QFT_backquote.mc QFT_backquote_metafied.mc
object-oriented-like.lisp QFT_object-oriented-like.mc QFT_object-oriented-like_metafied.mc
primes-print.lisp QFT_primes-print.mc QFT_primes-print_metafied.mc
primes.lisp QFT_primes.mc QFT_primes_metafied.mc

Pattern files preloaded with various Lisp programs are available here. Detailed statistics such as the running time and the memory consumption are available in the Running Times and Statistics section.

The patterns can be simulated on the Game of Life simulator Golly .

The VarLife patterns can be simulated on Golly as well. To run the VarLife patterns, open Golly, go to File -> Preferences -> Control, and check the location of the “Your Rules” directory. Open that directory, and copy https://github.com/woodrush/QFT-devkit/blob/main/QFT-devkit/Varlife.rule into it.

Descriptions of the Lisp Programs

  • object-oriented-like.lisp : This example creates a structure similar to classes in Object-Oriented Programming, using closures.

    • The class has methods and field variables, where each instance carries distinct and persistent memory locations of their own. The example instantiates two counters and concurrently modifies the value held by each instance.
    • New syntaxes for instantiation and method access, (new classname) and (. instance methodname) , are introduced using macros and functions.

    The Lisp interpreter’s variable scoping and macro features are powerful enough to handle this kind of memory management, and even to provide new syntax supporting the target paradigm.

  • printquote.lisp : A simple demonstration of macros.

  • factorial.lisp : A simple demonstration of recursion with the factorial function.

  • z-combinator.lisp : Demonstration of the Z Combinator to implement a factorial function using anonymous recursion .

  • backquote-splice.lisp : Implements the backquote macro used commonly in Lisp to construct macros. It also supports the unquote and unquote-splice operations, each written as ~ and ~@ .

  • primes.lisp : Prints a list of prime numbers up to 20. This example highlights the use of the while syntax.

The contents of print.lisp are quite straightforward - it calculates and prints the result of 3 * 14. backquote.lisp and primes-print.lisp are similar to backquote-splice.lisp and primes.lisp, and are mainly included for performance comparisons. backquote.lisp doesn’t implement the unquote-splice operation, and demonstrates some more examples. primes-print.lisp reduces the number of list operations to save memory usage.

Details of the Lisp Interpreter

Special Forms and Builtin Functions

  • define
  • if
  • quote
  • car, cdr
  • cons
  • list
  • atom
  • print
  • progn
  • while
  • lambda, macro
  • eval
  • eq
  • +, -, *, /, mod, <, >

Lexical Closures

This Lisp interpreter supports lexical closures. The implementation of lexical closures is powerful enough to write an object-oriented-like code as shown in object-oriented-like.lisp , where classes are represented as lexical closures over the field variables and the class methods.
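As a Python analogue (not the repository’s Lisp code), the same closure-as-class idea looks like this: each instance is a closure over its own field variable, so two counters keep distinct, persistent state.

def new_counter():
    count = 0                       # field variable captured by the closure
    def increment():
        nonlocal count
        count += 1
        return count
    def get():
        return count
    return {"increment": increment, "get": get}   # the "methods" of the instance

a, b = new_counter(), new_counter()
a["increment"](); a["increment"](); b["increment"]()
print(a["get"](), b["get"]())       # -> 2 1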

Macros

This Lisp interpreter supports macros. Lisp macros can be thought of as functions that receive code and return code. Following this design, macros are treated exactly the same as lambdas, except that they take their arguments as raw S-expressions, and the result is evaluated twice (the first time to build the expression, and the second time to actually evaluate the built expression).
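A minimal Python sketch of this treatment (the expression format and the twice macro are invented for illustration; the real interpreter is written in C):

def evaluate(expr, env):
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):                     # symbol lookup
        return env[expr]
    op, *args = expr
    if op == "+":
        return sum(evaluate(a, env) for a in args)
    fn = env[op]
    if getattr(fn, "is_macro", False):
        expansion = fn(*args)                     # arguments passed as raw expressions
        return evaluate(expansion, env)           # second evaluation of the built expression
    return fn(*(evaluate(a, env) for a in args))  # a lambda: arguments are evaluated first

def twice(x):                                     # hypothetical macro: (twice e) -> (+ e e)
    return ["+", x, x]
twice.is_macro = True

env = {"twice": twice, "n": 20}
print(evaluate(["twice", ["+", "n", 1]], env))    # -> 42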

Running Times and Statistics

VarLife Patterns

Lisp Program and Pattern (VarLife) #Halting Generations (VarLife) Running Time (VarLife) Memory Usage (VarLife)
print.lisp [ pattern ] 105,413,068 (exact) 1.159 mins 5.0 GiB
lambda.lisp [ pattern ] 700,000,000 2.966 mins 12.5 GiB
printquote.lisp [ pattern ] 800,000,000 3.424 mins 12.5 GiB
factorial.lisp [ pattern ] 1,000,000,000 5.200 mins 17.9 GiB
z-combinator.lisp [ pattern ] 1,700,000,000 9.823 mins 23.4 GiB
backquote-splice.lisp [ pattern ] 4,100,000,000 20.467 mins 27.5 GiB (max.)
backquote.lisp [ pattern ] 4,100,000,000 21.663 mins 27.5 GiB (max.)
object-oriented-like.lisp [ pattern ] 4,673,000,000 22.363 mins 27.5 GiB (max.)
primes-print.lisp [ pattern ] 8,880,000,000 27.543 mins 27.5 GiB (max.)
primes.lisp [ pattern ] 9,607,100,000 38.334 mins 27.5 GiB (max.)

Conway’s Game of Life (GoL) Patterns

Lisp Program and Pattern (GoL) #Halting Generations (GoL) Running Time (GoL) Memory Usage (GoL)
print.lisp [ pattern ] 3,724,032,866,304 382.415 mins 27.5 GiB (max.)
lambda.lisp [ pattern ] 24,729,600,000,000 1372.985 mins 27.5 GiB (max.)
printquote.lisp [ pattern ] 28,262,400,000,000 1938.455 mins 27.5 GiB (max.)
factorial.lisp [ pattern ] 35,328,000,000,000 3395.371 mins 27.5 GiB (max.)
z-combinator.lisp [ pattern ] 60,057,600,000,000 - -
backquote-splice.lisp [ pattern ] 144,844,800,000,000 - -
backquote.lisp [ pattern ] 144,844,800,000,000 - -
object-oriented-like.lisp [ pattern ] 165,087,744,000,000 - -
primes-print.lisp [ pattern ] 313,712,640,000,000 - -
primes.lisp [ pattern ] 339,399,628,800,000 - -

Common Statistics

Lisp Program #QFT CPU Cycles QFT RAM Usage (Words)
print.lisp 4,425 92
lambda.lisp 13,814 227
printquote.lisp 18,730 271
factorial.lisp 28,623 371
z-combinator.lisp 58,883 544
backquote-splice.lisp 142,353 869
backquote.lisp 142,742 876
object-oriented-like.lisp 161,843 838
primes-print.lisp 281,883 527
primes.lisp 304,964 943

The running times for each program are shown above. The Hashlife algorithm used for the simulation requires a lot of memory in exchange for speedups. The simulations were run on a 32GB-RAM computer, with Golly’s memory usage limit set to 28000 MB, and the default base step set to 2 (configurable from the preferences). The memory usage was measured with Ubuntu’s activity monitor. “(max.)” marks where the maximum permitted memory was used. The number of CPU cycles and the QFT memory usage were obtained by running the QFTASM interpreter on the host PC. The QFT memory usage counts the number of RAM addresses that were written at least once. The memory usage is measured in words, which are 16 bits in this architecture.

All of the VarLife patterns can actually be run to completion on a computer. The shortest running time is about 1 minute, for print.lisp . A sophisticated program such as object-oriented-like.lisp can even be run in about 22 minutes.

On the other hand, the Game of Life patterns take significantly more time than the VarLife patterns, but short programs can still be run in a moderately reasonable amount of time. For example, print.lisp finishes running in about 6 hours as a Game of Life pattern. As mentioned in the “Conversion from VarLife to Conway’s Game of Life” section, since the Game of Life pattern emulates the behavior of the VarLife pattern using OTCA Metapixels, the behavior of the Game of Life patterns can be verified by running the VarLife patterns.

Tests

There are tests to check the behavior of the Lisp interpreter: one that checks the QFTASM-compiled Lisp interpreter using the QFTASM interpreter, and one that checks the GCC-compiled Lisp interpreter on the host PC. To run these tests, use the following commands:

git submodule update --init --recursive # Required for building the source

make test             # Run the tests for the QFTASM-compiled Lisp interpreter, using the QFTASM interpreter
make test_executable  # Run the tests for the executable compiled by GCC

Running make test requires Hy , a Clojure-like Lisp implemented in Python available via pip install hy . Some of the tests compare the output results of Hy and the output of the QFTASM Lisp interpreter.

The tests were run on Ubuntu and Mac.

Building from Source

This section explains how to load the Lisp interpreter (written in C) to the Game of Life pattern, and also how to load a custom Lisp program into the pattern to run it on Game of Life.

Please see build.md from the GitHub repository.

Implementation Details

This section describes the implementation details for the various optimizations for the QFT assembly and the resulting Game of Life pattern.

The C Compiler layer

  • Added the computed goto feature to ELVM
    • This was merged into the original ELVM project.
  • Modified the compiler to preserve and output memory address symbols and program address symbols, for their usage in the compiler optimization tool in the QFTASM layer
    • This allows memheader.eir to be used, so that symbols used in the C source can be referenced in the ELVM assembly layer using the same variable symbols.

The ELVM Assembly layer

  • Wrote the QFTASM backend for ELVM
    • This was merged into the original ELVM project.
  • Added further improvements to the QFTASM backend:
    • Let the ELVM assembly’s memory address space match QFT’s native memory address space
      • Originally, the ELVM assembly had to convert its memory address every time when a memory access occurs.
    • Support new opcodes added in the improved QFT architecture

The C layer (the implementation of the Lisp interpreter)

Usage of binary search and hashtables for string representations and comparisons

By profiling the GCC-compiled version of the Lisp interpreter, it was found that the string table lookup process was a major performance bottleneck, leaving a large room for optimization.

The optimized string lookup process is as follows. First, when the Lisp parser accepts a symbol token, it creates a 4-bit hash of the string from the checksum of its ASCII representation. The hash indexes a hashtable that holds the root of a binary search tree used for string comparison. Each node in the tree holds the string of a symbol token, together with two child nodes that come before and after that token in alphabetical order. When a query symbol token arrives during the parsing phase, the node with a matching token is returned, or a new node for the token is added to the binary tree if the token does not exist yet. This allows each distinct symbol in the S-expression to have a distinct memory address.

In the interpretation phase, since each distinct symbol has a distinct memory address, and every string required for the Lisp program has already been parsed, string comparison can be done by simply comparing the memory addresses of the tokens. Since the interpreter only uses string equality (never ordering) for string comparison, checking integer equality suffices, speeding up the interpretation phase. In addition, since the hash key is 4 bits long, the binary tree search is about 4 comparisons shorter than it would be with a single binary tree.
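A minimal Python sketch of the interning scheme described above (the checksum and tree details are simplified assumptions, not the interpreter’s exact C code):

class Node:
    def __init__(self, name):
        self.name, self.left, self.right = name, None, None

buckets = [None] * 16                        # one binary search tree root per 4-bit hash value

def intern(name):
    key = sum(name.encode()) & 0xF           # 4-bit checksum hash of the ASCII bytes
    node = buckets[key]
    if node is None:
        buckets[key] = Node(name)
        return buckets[key]
    while True:
        if name == node.name:
            return node                      # existing symbol: the same node (address) is reused
        branch = "left" if name < node.name else "right"
        child = getattr(node, branch)
        if child is None:
            child = Node(name)
            setattr(node, branch, child)
            return child
        node = child

print(intern("lambda") is intern("lambda"))  # -> True: symbol equality is now pointer equality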

Usage of jump hash tables for the special form evaluation procedure searches

There are 17 distinct procedures for evaluating the special forms in the Lisp interpreter: define , if , quote , car , cdr , cons , atom , print , progn , while , { lambda , macro }, eval , eq , { + , - , * , / , mod }, { < , > }, list , and lambda/macro invocations (when the token is not a special form). Using a chain of if statements to find the corresponding procedure for a given token amounts to a linear search over token comparisons. To speed up this search, a hash table is created for jumping to the corresponding procedures. Since the memory addresses for the special forms can be determined before parsing the Lisp program, all of the symbols for the special forms have fixed memory addresses. Therefore, the hash key can be created by subtracting an offset from the symbol's memory address, pointing into a hashtable placed near the register locations. This hashtable is provided in memheader.eir . When the hash key falls outside the region of this hashtable, the symbol is not a special form, so evaluation jumps to the lambda/macro invocation procedure.
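A minimal Python sketch of the dispatch idea (the addresses, offset, and handler names below are hypothetical, for illustration only):

def eval_define(args): return ("define", args)
def eval_if(args): return ("if", args)
def eval_quote(args): return ("quote", args)
def eval_apply(args): return ("apply", args)       # fallback: lambda/macro invocation

SPECIAL_FORM_BASE = 100                             # assumed address of the first special-form symbol
JUMP_TABLE = [eval_define, eval_if, eval_quote]     # stored near the registers in the real layout
SYMBOL_ADDR = {"define": 100, "if": 101, "quote": 102, "my-func": 245}

def dispatch(symbol, args):
    key = SYMBOL_ADDR[symbol] - SPECIAL_FORM_BASE
    if 0 <= key < len(JUMP_TABLE):                  # key inside the table: a special form
        return JUMP_TABLE[key](args)
    return eval_apply(args)                         # outside the table: lambda/macro invocation

print(dispatch("if", ["c", "t", "e"]))              # -> ('if', ['c', 't', 'e'])
print(dispatch("my-func", [1, 2]))                  # -> ('apply', [1, 2])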

The Lisp implementation has 3 distinct value types: ATOM , INT , and LAMBDA . Each value consumes only one QFT word of memory; an ATOM value holds a pointer into the symbol's string hashtable, an INT value holds the signed integer value, and a LAMBDA value holds a pointer to the Lambda struct, as well as its subtype information, one of LAMBDA , MACRO , TEMPLAMBDA , or TEMPMACRO . (The TEMPLAMBDA and TEMPMACRO subtypes are lambda and macro types that recycle their argument value memory space every time they are called, but they are unused in the final Lisp programs.) Since the RAM's address space is only 10 bits, values holding pointers have 6 free bits, and the value type and subtype information is held in these free bits. This makes integers in the Lisp implementation 14-bit signed integers, ranging from -8192 to 8191.
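A minimal Python sketch of this kind of bit packing (the tag values and exact layout here are assumptions for illustration, not the interpreter’s actual encoding):

ATOM, INT, LAMBDA = 0, 1, 2              # hypothetical 2-bit type tags

def pack_int(n):
    assert -8192 <= n <= 8191            # the 14-bit signed range quoted above
    return (INT << 14) | (n & 0x3FFF)    # tag in the free high bits, payload below

def unpack(word):
    tag = word >> 14
    payload = word & 0x3FFF
    if tag == INT and payload & 0x2000:  # sign-extend the 14-bit payload
        payload -= 0x4000
    return tag, payload

print(unpack(pack_int(-42)))             # -> (1, -42)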

Minimization of Stack Region Usage

Since the C compiler used in this project does not have memory optimization features, this has to be done manually within the C source code. This is the largest reason why the interpreter's source code looks obfuscated.

One of the largest bottlenecks for memory access was stack region usage. Every time a stack region memory access occurs, the assembly code performs memory address offset operations to access the stack region. This does not happen when accessing the heap memory, since there is only one heap region used in the entire program, so the pointers for global variables can be hard-coded by the assembler. Therefore, it is favorable optimization-wise to use the heap memory as much as possible.

One way to make use of this fact is to use as many global variables as possible. Since registers and common RAM memory share the same memory space, global variables can be accessed at a speed comparable to registers. (However, since the physical location of a RAM memory slot within the pattern affects the I/O signal arrival time, and the registers have the smallest RAM addresses, i.e. they are the closest to the CPU unit, the registers still have the fastest memory access times.)

Another method of saving memory was to use union memory structures to minimize stack region usage. In the C compiler used in this project, every time a new variable is introduced in a function, the function's stack region usage (used per call) is increased to fit all of the variables. This happens even when two variables never appear at the same time. Therefore, using the fact that some variables never appear simultaneously, unions are used for every occurrence of such variables so that they share a region within the stack space, minimizing the stack region usage. Since the stack region is only 233 words large (one word in the QFT RAM is 16 bits), this allowed more deeply nested function calls, especially nested calls of eval , which evaluates the S-expressions. Since S-expressions have a list structure, and eval becomes nested when lambdas are called in the Lisp program, this optimization was significant for allowing more sophisticated Lisp programs to run on the architecture.

The QFTASM layer

The QFT assembly generated by the C compiler has a lot of room for optimization. I therefore created a compiler optimization tool to reduce the QFTASM assembly size.

Constant folding

Immediate constant expressions such as ADD 1 2 destination are folded into a MOV operation.
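A minimal Python sketch of this pass (the tuples below are a stand-in for QFTASM instructions, not actual syntax):

def fold_constants(instructions):
    out = []
    for op, a, b, dst in instructions:
        if op == "ADD" and isinstance(a, int) and isinstance(b, int):
            out.append(("MOV", a + b, None, dst))   # both operands are immediates: precompute
        else:
            out.append((op, a, b, dst))
    return out

print(fold_constants([("ADD", 1, 2, 7)]))           # -> [('MOV', 3, None, 7)]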

MOV folding

The QFT assembly code can be split into subregions by jump operations, such that:

  • Each subregion doesn’t contain any jump operations
  • Each subregion ends with a jump operation
  • Every jump operation in the assembly is guaranteed to jump to the beginning of a subregion, and never to the middle of any subregion

The last guarantee, that jumps never land in the middle of a subregion, is provided by the C compiler. The ELVM assembly's program counter is designed to increase only when a jump instruction appears. This makes one ELVM program counter value point to a sequence of multiple instructions instead of a single instruction. Since the ELVM assembly uses the ELVM program counter for its jump instructions, it is guaranteed that the jump instructions in the QFT assembly never jump to the middle of any subregion, and always jump to the beginning of a subregion.

In each subregion, a dependency graph for the memory addresses is created. If a memory address is written but later overwritten without being used anywhere in that subregion, the instruction writing to that memory address is removed. Since jump operations are guaranteed never to jump into the middle of any subregion, the overwritten values can be safely removed without affecting the outcome of the program. The MOV folding optimization makes use of this fact to remove unnecessary instructions.

This folding process is also done with dereferences: if a dereferenced memory address is written, the address is later overwritten without being used, and the dereference source is not overwritten at any point during this process, then the instruction writing to the dereferenced memory address is removed.
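A minimal Python sketch of the core of this pass (the instruction representation is assumed, and the dereference handling described above is omitted):

def fold_movs(subregion):
    # subregion: list of (op, srcs, dst); srcs is a tuple of read addresses.
    kept = []
    pending = {}                          # dst -> index of the last write not yet read
    for op, srcs, dst in subregion:
        for s in srcs:
            pending.pop(s, None)          # a read keeps the earlier write alive
        if dst in pending:
            kept[pending[dst]] = None     # earlier write is dead within the subregion: drop it
        kept.append((op, srcs, dst))
        pending[dst] = len(kept) - 1
    return [ins for ins in kept if ins is not None]

sub = [("MOV", (), 5), ("MOV", (), 5), ("ADD", (5, 6), 7)]
print(fold_movs(sub))                     # the first, unread write to address 5 is removed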

Jump folding

If the destination of a conditional or fixed-destination jump instruction points to another jump instruction with a fixed destination, the jump destination is folded to the latter jump instruction’s destination.

A similar folding is done when a fixed jump instruction points to a conditional jump instruction, where the fixed jump instruction is replaced by the latter conditional jump instruction.
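A minimal Python sketch of the first of these two rules (the program representation is assumed):

def fold_jumps(prog):
    # prog: list of ("JMP", dest), ("JNZ", cond, dest), or other tuples,
    # where dest is an index into prog.
    def final_dest(d):
        seen = set()
        while prog[d][0] == "JMP" and d not in seen:   # follow chains of fixed jumps
            seen.add(d)
            d = prog[d][1]
        return d
    out = []
    for ins in prog:
        if ins[0] == "JMP":
            out.append(("JMP", final_dest(ins[1])))
        elif ins[0] == "JNZ":
            out.append(("JNZ", ins[1], final_dest(ins[2])))
        else:
            out.append(ins)
    return out

prog = [("JMP", 2), ("NOP",), ("JMP", 3), ("NOP",)]
print(fold_jumps(prog))   # -> [('JMP', 3), ('NOP',), ('JMP', 3), ('NOP',)]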

The VarLife layer (the computer architecture)

Created a lookup table structure for the ROM module

In this image of the CPU and its surrounding modules, the two modules on the top are the ROM modules. The original ROM module had one table, with the memory address as the key and the instruction as the value. I recreated the ROM module to add a lookup table layer, where each distinct instruction (not just the opcode, but the entire instruction including the values used within it) is assigned a distinct serial integer key. The ROM module on the right accepts a program counter address and returns the instruction key for that program counter. The module on the left accepts the instruction key and returns the actual bits of the instruction as the output. This allows dictionary compression to be applied to the ROM data, saving a lot of space. Since the instructions are 45 bits and the instruction keys are only 10 bits, the instruction key table is roughly 1/4 the size of the original ROM module. Although the ROM holds 3223 instructions for the entire Lisp interpreter, only 616 of them are distinct, so the instruction value table only needs to be 616 ROM units high, effectively reducing the overall ROM module size.
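A rough Python illustration of the dictionary compression idea (the instruction strings below are made up; the real tables store 45-bit instruction words and 10-bit keys):

rom = ["ADD 1 2 3", "MLZ -1 7 5", "ADD 1 2 3", "ADD 1 2 3", "MLZ -1 7 5"]

value_table = sorted(set(rom))                    # one entry per distinct instruction
key_of = {ins: k for k, ins in enumerate(value_table)}
key_table = [key_of[ins] for ins in rom]          # indexed by program counter

def fetch(pc):
    return value_table[key_table[pc]]

print(key_table)        # -> [0, 1, 0, 0, 1]
print(fetch(3))         # -> 'ADD 1 2 3'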

The ROM module features another form of compression, where the absence of cells is used to represent 0-valued bits within an instruction. Below is a close-up look at the ROM value module:

The ROM value module

Notice that some cells on the left are absent, despite the table being expected to be rectangular in shape. This is because absent cells do not emit any signals, hence effectively emitting 0-valued bits as their output. To make use of this fact, all of the instructions are sorted alphabetically at table-creation time, so that instructions that start with runs of zeroes become located higher in the table (further from the signal source). This allows a maximum number of cells to be replaced with absent units representing 0-valued bits. In fact, the no-op instruction is encoded as all zeroes, so all of its units in the value module are replaced by absent cells. The no-op instruction appears many times immediately after jump operations, because the QFT architecture has a branch delay when invoking a jump instruction, requiring a no-op instruction to compensate for the delay.

Added new optimized instructions to the ALU, and removed unused ones

I removed the AND , OR , SL (shift left), SRL (shift right logical), and SRA (shift right arithmetic) opcodes, and added the SRU (shift right unit) and SRE (shift right eight) opcodes to the architecture. Since there already were opcodes for XOR (bitwise xor) and ANT (bitwise and-not), AND and OR , which were not used much in the interpreter, could be replaced by these opcodes. The bitshift operations had significantly larger patterns than the other opcodes, more than 10 times larger. These were reduced to fixed-amount shift operations, which could be implemented at the same size as the other opcodes. Since a shift left can be replaced by repeatedly adding a value to itself, effectively multiplying by powers of 2, that opcode could be safely removed. The main reason the original bitshift units were large was that the shift amount depended on a value in the RAM; converting a binary value to a physical (in-pattern) shift amount required a large pattern, whereas shifting by a fixed amount can be implemented with a significantly simpler pattern. The shift right eight instruction is mainly used for reading the standard input, where pairs of ASCII characters from the input string are packed into each 16-bit RAM memory address.

This resulted in a total of exactly 8 opcodes: ANT , XOR , SRE , SRU , SUB , ADD , MLZ , and MNZ . Since 8 opcodes fit in 3 bits, the opcode field of the instruction was reduced by 1 bit. Moreover, since the RAM address space is 10 bits, the third operand of an instruction is always the RAM write destination, and the first operand can be arranged to always be the RAM read source, an additional 6*2=12 bits could be removed from the instruction length. Altogether this reduced the ROM word size from 58 to 45 bits, cutting nearly 1/4 of the original instruction size.

Extended the ROM and RAM address spaces from 9 and 7 bits to 12 and 10 bits

The original QFT architecture had ROM and RAM address spaces of 9 and 7 bits. I extended them to 12 and 10 bits, respectively. This was not as straightforward a task as it first seemed, since the signal arrival timings between the modules had to be carefully adjusted for the signals to line up correctly. This involved reverse-engineering and experimenting with undocumented VarLife pattern units used in the original QFT architecture. The same held when redesigning other parts of the architecture.

Reducing the Standard Input Size

Since each byte of the RAM module can be ordered arbitrarily in the CPU’s architecture, the RAM is arranged so that the standard output is written at the very bottom of the RAM module, and proceeds upwards. Therefore, the contents of the RAM can easily be observed in a Game of Life viewer by directly examining the bottom of the RAM module.

Since the RAM has 16 bits of memory per address, two ASCII-encoded characters fit into one address. Therefore, the standard input is read out two characters per address. For the standard output, one character is written per address for aesthetic reasons, so that the characters can be observed more easily when viewing the pattern directly in a Game of Life viewer. Also, so that the standard output proceeds upwards within the RAM module pattern, the memory pointer for the standard output moves backwards through the memory space, while the pointer for the standard input moves forwards.
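A minimal Python sketch of this packing scheme (the byte order within a word is an assumption, not necessarily the interpreter’s actual layout); it also shows why a fixed shift-right-by-eight opcode is all that is needed to recover the second character:

def pack(text):
    words = []
    for i in range(0, len(text), 2):
        lo = ord(text[i])                       # assumed: first character in the low byte
        hi = ord(text[i + 1]) if i + 1 < len(text) else 0
        words.append((hi << 8) | lo)
    return words

def unpack(words):
    chars = []
    for w in words:
        chars.append(chr(w & 0xFF))
        if w >> 8:                              # the SRE-style shift exposes the second character
            chars.append(chr(w >> 8))
    return "".join(chars)

print(unpack(pack("(print (* 3 14))")))         # round-trips the Lisp source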

The Game of Life layer

Optimizing the Game of Life layer mainly revolved around understanding the Macrocell format for representing and saving Game of Life patterns, and the Hashlife algorithm. The Macrocell format uses quadtrees and memoization to compress repeated patterns. Since the final Game of Life pattern is an array of OTCA metapixels, each 2048x2048 cells in size, and since the VarLife layer itself contains repeated patterns (meaning there are repeated configurations of OTCA metapixels), this compression reduces the file size of the QFT pattern significantly. The best example that helped me understand the Macrocell format was one provided by Adam P. Goucher in this thread in Golly’s mailing list.

The Hashlife algorithm also uses quadtrees and memoization to speed up Game of Life simulations. The algorithm makes use of the fact that the same pattern over the same time span influences only a fixed extent of its surrounding region, which allows for memoization.

As for optimization, I first noticed that the QFT pattern had a 1-pixel-high pattern concatenated to the entire pattern. The original QFT pattern in the original QFT repository was carefully designed to be composed of 8x8-sized pattern units, so most of the pattern can be represented by 8x8 tiles. However, the 1-pixel-high pattern at the top creates an offset that shifts the pattern away from this 8x8 grid, so the pattern has fewer repeated tiles when interpreted from the corner of its bounding box, making the memoization work inefficiently. I therefore tried adding a redundant cell (which does not interfere with the rest of the pattern) to realign the entire pattern to its 8x8 grid, which actually slightly reduced the resulting Macrocell file size compared to the original. Although I didn't compare the running times, since the Hashlife algorithm also memoizes repeated patterns, I expect this optimization to contribute at least slightly to the performance of the simulation.

Another optimization was improving the metafier script used to convert VarLife patterns to Game of Life ( MetafierV3.py ). The original script used a square region fitting the entire pattern to create the quadtree representation. However, since the Lisp in Life VarLife pattern is 968 pixels wide but 42354 pixels high, the script tried to allocate a 65536x65536 integer array, which was prohibitively large to run. I modified the script to use a rectangular region, where absent regions of the quadtree are represented as absent cells. Although this is straightforward with knowledge of the Macrocell format, it was difficult at first, until I became familiar with the algorithms surrounding the Game of Life.

Memory Region Map and the Phases of Operation

The memory region map of Lisp in Life.

The memory region map is carefully designed to save space. This is best described with the operation phases of the interpreter.

Phase 0: Precalculations

Various precalculations are done after the interpreter starts running. The construction of the string interning hashtable for reserved atoms such as define , quote , etc. is done in this phase. For the GCC-compiled interpreter, some variables that are defined in the QFT memory header are defined in the C source instead.

Since the outcome of these precalculations is always the same for any incoming Lisp program, this phase is run on the host PC and the results are saved as ramdump.csv at QFTASM compile time. The results are then pre-loaded into the RAM when the VarLife and Game of Life patterns are created. This saves some CPU cycles when running the interpreter.

As explained earlier, the QFT architecture holds register values in the RAM. There are 11 registers, which are placed in the addresses from 0 to 10.

The reserved values in the image include strings such as reserved atoms and the destinations of the jump hashtable used for evaluation. The rest of the region is used for storing global variables in the interpreter’s C source code.

Phase 1: Parsing

The Lisp program provided on the standard input is parsed into S-expressions, which are written into the heap region.

Notice that the string interning hashtables are created toward the far end of the stack region. This is because these hashtables are only used during the parsing phase and can be overwritten during the evaluation phase. For most Lisp programs, including the ones in this repository, the stack region does not grow far enough to overwrite these values. This allows three growing memory regions to coexist during the parsing phase: the stack region used for nested S-expressions, the heap region which stores the parsed S-expressions, and the string interning hashtables, which grow as new strings are detected in the Lisp program. Newly detected strings such as variable names in the Lisp program are also written into the heap region.

The heap region is also designed to overwrite the standard input as the program is parsed. Since older parts of the program can be discarded once they are parsed, this naturally frees the standard input region, saving a lot of space after parsing. The standard input also gets overwritten by the standard output if the output is long enough. However, due to this design, long programs may have trouble during parsing, since the input may be overwritten too far ahead and get deleted before it is parsed. A workaround is to add indentation, which places the program further ahead in memory and prevents it from being overwritten by the growing heap region. For all of the programs included in this repository, this is not an issue and the programs are parsed successfully.

Phase 2: Evaluation

By this time, all of the contents of the stack region, and everything ahead of the head of the heap region, can be overwritten in the following steps. Note that an issue similar to the standard-input one exists for the standard output: when too many Lisp objects are created at runtime, the heap may overwrite the existing standard output, or simply exceed the heap region and proceed into the stack region. Since the heap region is connected to the far end of the stack region, this can be safe if the standard output is handled carefully, but the interpreter will eventually start overwriting values in the stack region if the heap continues to grow.

Miscellaneous

How can a 2-state OTCA Metapixel emulate the behavior of an 8-state VarLife pattern?

This is one of the most interesting ideas in the original QFT project, and what makes the QFT architecture possible. As explained in the original QFT post , the 8 states of VarLife are actually a mixture of 4 different birth/survival rules with binary states. This means that each VarLife cell only transitions between two fixed states, and the birth/survival rule for that cell never changes over time. Moreover, the OTCA Metapixel is designed so that each metapixel can carry its own birth/survival rule. Therefore, each VarLife cell can be encoded as an OTCA Metapixel by specifying its birth/survival rule and its binary state. This means that the array of OTCA Metapixels in the metafied pattern is actually a mixture of metapixels with different birth/survival rules, arranged in a way that makes the computation possible.

Halting Time

After the program counter is set to 65535 and the program exits, no more ROM and RAM I/O signals appear anywhere in the module. This makes the VarLife pattern completely stationary, with every generation from that point on identical. Defining this as the halting time of the calculation, the pattern for print.lisp halts at exactly 105,413,068 VarLife generations.

The halting time for the Game of Life patterns is defined similarly, in terms of the meta-states of the OTCA Metapixels. Since OTCA Metapixels never become stationary, the Game of Life states do not become stationary after the halting time, but the meta-states of the OTCA Metapixels do.

For the VarLife pattern of print.lisp , the value 65535 gets written to the program counter by generation 105,387,540. At generation 105,413,067, the last signal is one step from disappearing, and from generation 105,413,068 onwards the pattern is completely stationary, with every generation identical to the last. In the Game of Life version, since the OTCA Metapixels continue running indefinitely, the pattern never becomes completely stationary, but the meta-states of the OTCA Metapixels do, since they emulate the VarLife pattern. Note that the halting times listed for programs other than print.lisp are simply sufficient numbers of generations, not exact values.

The number of generations required per CPU cycle depends on many factors, such as the ROM and RAM addresses involved and the types of opcodes, since the arrival times of the I/O signals depend on these factors as well. This makes the number of generations required to halt differ between programs. For example, print.lisp runs at 23822.16 generations per CPU cycle (GpC), but z-combinator.lisp runs at 28870.81 GpC, and primes-print.lisp at 31502.43 GpC. 23822.16 GpC is in fact insufficient for z-combinator.lisp to finish running, and 28870.81 GpC is insufficient for primes-print.lisp.
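As a quick sanity check against the tables above, the GpC figure for print.lisp follows directly from its halting generation count and CPU cycle count:

# Generations per CPU cycle for print.lisp, using the VarLife and
# Common Statistics tables above.
print(105_413_068 / 4_425)   # -> about 23822.16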

Miscellaneous Screenshots

The ALU unit in the CPU. From the left are the modules for the ANT , XOR , SRE , SRU , SUB , ADD , MLZ , and the MNZ opcodes.

The SRE and the SRU opcodes were newly added for this project.

Credits

The CPU architecture used in this project was originally created by the members of the Quest For Tetris (QFT) project, and was later optimized and modified by Hikaru Ikuta for the Lisp in Life project. The VarLife cellular automaton rule was also defined by the members of the QFT project. The metafier for converting VarLife patterns to Conway’s Game of Life patterns was written by the members of the QFT project, and was later modified by Hikaru Ikuta to support the pattern size of the Lisp in Life architecture. The assembly language for the QFT architecture, QFTASM, was also originally designed by the members of the QFT project, and was later modified by Hikaru Ikuta for this project for achieving a feasible running time. The Lisp interpreter was written by Hikaru Ikuta. The compilation of the interpreter’s C source code to the ELVM assembly is done using an extended version of 8cc written by Rui Ueyama from Google. The compilation from the ELVM assembly to QFTASM is done by an extended version of ELVM (the Esoteric Language Virtual Machine), a project by Shinichiro Hamaji from Preferred Networks, Inc. The Game of Life backend for ELVM was written by Hikaru Ikuta, and was later further extended by Hikaru for the Lisp in Life project.

‘Charismatic, self-assured, formidable’: Lara Croft returns with two new Tomb Raider games

Guardian
www.theguardian.com
2025-12-12 02:45:31
An all-new Croft adventure, Tomb Raider Catalyst, will be released in 2027 – and a remake of the action heroine’s first adventure arrives next year After a long break for Lara Croft, a couple of fresh Tomb Raider adventures are on their way. They will be the first new games in the series since 2018,...
Original Article

After a long break for Lara Croft, a couple of fresh Tomb Raider adventures are on their way. They will be the first new games in the series since 2018, and both will be published by Amazon.

Announced at the Game Awards in LA, Tomb Raider Catalyst stars the “charismatic, self-assured, formidable Lara Croft” from the original 1990s games, says game director Will Kerslake. It’s set in the markets, mountains, and naturally the ancient buildings of northern India, where Lara is racing with other treasure hunters to track down potentially cataclysmic artefacts. It will be out in 2027.

Catalyst is in development at Crystal Dynamics, the US developer that has looked after Tomb Raider since 2003. Crystal Dynamics previously developed Tomb Raiders Legend, Anniversary and Underworld, as well as the reboot trilogy that featured a younger, more vulnerable Lara. Its most recent game in the series was 2018’s Shadow of the Tomb Raider.

Tomb Raider: Legacy of Atlantis, meanwhile, is being developed in collaboration with Flying Wild Hog in Poland. Featuring modernised combat and redesigned tomb-delving puzzles, the game is an “expanded” ground-up reimagining of Lara Croft’s very first 1996 adventure, which established her as one of gaming’s most recognisable characters. “Our goal was to respect the spirit of Core Design’s original game, while updating the experience for today’s gamers including building it with Unreal Engine 5,” Kerslake says. “We see this is a reimagining of the original game with a gameplay experience that wasn’t possible on the technology at the time.” Legacy of Atlantis will be out in 2026.

In both games, Croft will be played by the British actor Alix Wilton Regan, who previously had significant roles in Dragon Age: Inquisition and Cyberpunk 2077. She was also due to star as Joanna Dark in the cancelled remake of 2000’s spy-shooter Perfect Dark.

Despite the series’ recent hiatus, Tomb Raider still has immense brand recognition, built on its 1990s heyday. Amazon also has a TV series in the works with Phoebe Waller-Bridge, which is to star Sophie Turner as Croft. Over 30 years, Croft has been everything from Playboy pin-up to rugged wilderness adventurer to gentlewoman action hero.

“Across the series, our goal is to show that Lara is always evolving,” Kerslake says. “Her core DNA will remain, but we believe it’s important to show how each adventure shapes her character over time.”

Tidbits-Dec. 11- Reader Comments: Trump’s Deportations; Starbucks Strike; Rural America Myths; Trump’s MRI Results; New Resource – Zohran and Municipal Socialism; Amazon Labor Union; Online Forum-Building Working Class Solidarity With Ukraine; More..

Portside
portside.org
2025-12-12 02:44:56
Tidbits-Dec. 11- Reader Comments: Trump’s Deportations; Starbucks Strike; Rural America Myths; Trump’s MRI Results; New Resource – Zohran and Municipal Socialism; Amazon Labor Union; Online Forum-Building Working Class Solidarity With Ukraine; More.. jay Thu, 12/11/2025 - 21:44 ...
Original Article

Resources:

Announcements:


Deportation  --  Cartoon by Clay Bennett




Clay Bennett
December 9, 2025
Chattanooga Times Free Press

End Family Separation  --  Cartoon and Commentary by Lalo Alcaraz


We are all heartbroken when we hear about more and more families being separated by ICE and this rogue Administration. This country has to abide by its claims that it is pro-family, and stop ripping our families apart.

Visit the LALO ON CALÓ Archive

Lalo Alcaraz
December 9, 2025
CALÓ NEWS

Re: The Emancipation Proclamation Offers a Hint on What the Supreme Court Will Do About Birthright Citizenship

Very relevant to the Pedo President’s wanting to end birth right citizenship.

Bill Audette
Posted on Portside's Facebook page

Re: Maybe a General Strike Isn’t So Impossible Now

(posting on Portside Labor )

Very good article on general strike potential.

Let us not forget the massive sliding strikes of (mainly) CIO unions in late 1945 - early 1946. Now we'd want to include a wider range of organizations, of course, some of whom could be knockin' doors and speaking at group meetings, etc., while awaiting their group's scheduled turn. Much thought and preparation needed within a somewhat loose leadership.

Just sayin'

Solidarity,

Jim Young
President emeritus
PA Labor History Society
SEIU ret; APSCUF/AFT ret; National Writers Union

Re: Starbucks Workers Are Still Without a Labor Deal Four Years After Their First Union Win. Here's Why

(posting on Portside Labor )

I will not patronize a Starbucks store until and unless a contract is signed:  perhaps there are others like me who patronize union shops whenever possible - and there are a lot of other "posibles" out there.

Carolyn M Birden
Member, WBAI Local Station Board




Rob Rogers
December 5, 2025
TinyView

Re: Six Myths About Rural America: How Conventional Wisdom Gets It Wrong

FACT:  when the media talks about "America's farmers" and "family farmers", it's bull. As of the 1990 Census, "family farmer" was "no longer a recognized occupation" because it was under 1.5% of the population. Those "farmers" are agribusiness. What's left are what are referred to as "hobby farms" (like the one a couple of friends of mine have in se Indiana). They *all* work outside the farm.

I posted this elsewhere, and one response noted that what happens on top of that is that the kids go to cities for more education; they may come back, but 70% leave permanently. This, along with shuttering of local hospitals and manufacturing means that what's happening is the depopulation of rural America.

mark


Mike Stanfill
December 10, 2025
Raging Pencils




We will never truly know what Donald Trump's MRI results revealed because we can't trust anything that comes out of this White House.

Trump's doctor said that the orange one had an MRI imaging of his heart and abdomen in October as part of a preventative screening for men his age. The physical exam included “advanced imaging,” which is “standard for an executive physical” in Trump’s age group. The doctor concluded that the cardiovascular and abdominal imaging was “perfectly normal.”

I call bullshit. There is nothing that comes out of this administration that is “perfectly normal.” And you would need advanced imaging to find his heart. You would probably have to put him right in front of the Hubble Telescope to find his brain.

Trump said, “It was just an MRI. What part of the body? It wasn’t the brain because I took a cognitive test and I aced it.” Yeah, it doesn't work like that.

Doctors typically order an MRI to help with diagnosing symptoms or to monitor an ongoing health problem. So-called “preventive” cardiac and abdominal MRIs are not part of routine screening recommendations. This so-called “executive” physical means it was special treatment for Donald Trump. Maybe Snoop Dogg sold him some extended body parts insurance.

You have never had an MRI as some sort of bonus or something extra on an exam. Do you wanna know why? Because an MRI is expensive, and your insurance company is not going to cover it just because the doctor wants to hunt for stuff that isn't giving any signs. Only very wealthy people have MRIs for shits and giggles. This is another example of Donald Trump being out of touch with people being hurt by his inflation and careening economy. I believe they were hunting for something, but they're not gonna tell us what that was.

During Trump's first term, there was a mysterious visit to the hospital, which was explained as a physical being broken up into two parts, which is a lie. You can't trust doctors who claim that Donald Trump only weighs 215 pounds. You can't trust doctors who claim that Donald Trump passed a cognitive test.

This White House cannot explain the weird bruises on Donald Trump's hands that he keeps putting makeup over. You can't trust this White House when they keep telling us that Trump is full of energy while he keeps falling asleep in public.

I don't wish bad health on anyone, even a douchenozzle like Donald Trump. I did not do that previously, but I am especially not going to do it after the experience of suffering a stroke. I'm not going to make any predictions about what will happen to Donald Trump healthwise, because I think it’s bad karma that can bounce back onto you. I'm also not going to engage in any conspiracy theories about his health. But it's obvious this White House is hiding something about Donald Trump's health.


Clay Jones
December 6, 2025
Claytoonz

Subscribe today to make sure you get your copy direct from the printer.


Get four print issues for $20


Zohran Mamdani’s victory is a breakthrough moment for the socialist movement in the United States. It still feels unbelievable: America’s largest city is about to be led by a young socialist, a Jacobin subscriber, and a committed member of Democratic Socialists of America. The Left will be occupying the municipal executive office in one of the world’s biggest cities.

But the contradictions we face are sharp, and the limits of executive office are real: hostile state government; pressure from capital; the political and economic challenges of raising revenue to fund essential programs.

These are not new problems. They are the same challenges faced by progressive and socialist parties here and abroad when they first entered municipal office.

In this issue, we examine the history of municipal socialism, from “Red Vienna” to French and Italian Communist mayors after World War II to Milwaukee’s sewer socialists, La Guardia’s New Deal metropolis, and Ken Livingstone in London. How has the Left used local power to expand housing, welfare, and cultural institutions? And why did they so often run aground on fiscal constraints and central state hostility?

The socialist left often won legitimacy in office, but also faced demobilization or the risk of being reduced to managers of austerity. The task for this issue is to draw out important lessons while staying optimistic about this truly transformational moment in power.

Zohran’s election is the most exciting moment for American socialists in generations. But to take advantage of it, we must clarify what municipal power can realistically achieve under capitalism and how it might serve as a stepping stone toward building working-class institutions and winning state and national breakthroughs both in New York and beyond.

This is a must-read issue for every socialist. Subscribe today to make sure you get a copy direct from the printer, plus three more editions.


Subscribe for just $20


If you prefer to subscribe by check, you can do so by mailing it to us at Jacobin Foundation, 388 Atlantic Avenue, Brooklyn, NY 11217. Just be sure to include your email and mailing address.


Tuesday, December 16

9:30am – 11:00am

Free and open to all. Breakfast will be served.

In-person-only:

CUNY School of Labor and Urban Studies

25 West 43rd Street, 18th floor, New York, NY 10036 ( map )

Click here to register.

Please register to receive event info and reminders.

( slucuny.swoogo.com/16December2025 )

Guest Speaker:

Sultana Hossain - Recording Secretary, Amazon Labor Union - IBT Local 1 (ALU)

Moderator:

Sarah Watson - Director, Murphy Institute, CUNY School of Labor and Urban Studies (SLU)

Join us to learn from Sultana Hossain, a key leader in the Amazon Labor Union - IBT Local 1 (ALU) , in conversation with Sarah Watson, director of the Murphy Institute at the CUNY School of Labor and Urban Studies.

What is ALU fighting for? How have ALU's major wins–from health & safety improvements to reversing illegal firings–helped to grow the union? What were the immediate and far-reaching impacts of the 2024 strike? How is ALU adapting its organizing in response to Amazon's retaliatory attacks? What are the major lessons to take away from ALU's groundbreaking fight thus far?

Join us for breakfast and a lively audience discussion!


Click here to register.

Please register to receive event info and reminders.

( slucuny.swoogo.com/16December2025 )



Wednesday, December 17, 2025 --  3:00 PM - 4:30 PM EST

Online, YouTube

Register Now


The Trump Administration and Vladimir Putin’s regime want to impose an imperialist peace on Ukraine, a partition of its sovereign territory that rewards Russian colonial aggression. Against all odds, Ukraine’s working class has sustained the popular and military struggle for self-determination, sacrificing life and limb and putting extreme strain on its healthcare system, alleviated to an extent by the heroic work of harm reduction activists in limiting the spread of infectious diseases.

Nurses and healthcare workers have had to face excruciating working conditions and long hours. To address this emergency, the Ukraine Solidarity Network has launched a campaign to raise funds for the independent Ukrainian nurses’ union, Be Like We Are, to pay for essential life-saving equipment for hospitals. Now is the time for solidarity and material support for Ukraine.

Fund Drive for Be Like We Are : https://www.gofundme.com/f/support-krylas-lifechanging-mission

***Register through Ticket Tailor to receive a link to the live-streamed video on the day of the event. This event will also be recorded and captioning will be provided.***

Speakers:

Denys Bondar is a native of Ukraine and an associate professor of physics at Tulane University as well as a member of the Ukraine Solidarity Network and Physicists Coalition for Nuclear Threat Reduction.

Oksana Slobodyna has been working for 28 years as a nurse in a children’s hospital. Since 2022, she has led the Medical Movement “Be Like Us,” as well as the Lviv Regional Trade Union of Healthcare Workers. She is the mother of two adopted sons and two daughters. Ukraine is the most important thing, and its people are the most precious thing we have.

Pavlo (Pasha) Smyrnov is the Deputy Executive Director of the Ukrainian Alliance for Public Health and has led innovative nationwide HIV and public health programs as well as mobile clinics to provide health services in frontline and hard-to-reach areas during the war.

Tricia Ryshkus , RN, is a pediatric union nurse in Minneapolis, MN and a leader in her in union, the Minnesota Nurses Association.

This event is sponsored by Haymarket Books and Ukraine Solidarity Network . While all of our events are freely available, we ask that those who are able make a solidarity donation in support of our important publishing and programming work.

Orvalho Spec

Lobsters
github.com
2025-12-12 01:48:31
I have this idea for some time now and today I decided to write it down the stuff I was analysing on. Right now I am a little low in time to work on that. I am actually a little frozen on how to approach this project. I posted here so I can gather some ideas to refine this idea or maybe inspire some...
Original Article


Cadmium Zinc Telluride: The wonder material powering a medical 'revolution'

Hacker News
www.bbc.com
2025-12-12 01:41:15
Comments...
Original Article

Chris Baraniuk, Technology Reporter

Amber-coloured cadmium zinc telluride in a furnace (Kromek)

Very few organisations can supply cadmium zinc telluride

Lying on your back in a big hospital scanner, as still as you can, with your arms above your head – for 45 minutes. It doesn't sound much fun.

That's what patients at Royal Brompton Hospital in London had to do during certain lung scans, until the hospital installed a new device last year that cut these examinations down to just 15 minutes.

It is partly thanks to image processing technology in the scanner but also a special material called cadmium zinc telluride (CZT), which allows the machine to produce highly detailed, 3D images of patients' lungs.

"You get beautiful pictures from this scanner," says Dr Kshama Wechalekar, head of nuclear medicine and PET. "It's an amazing feat of engineering and physics."

The CZT in the machine, which was installed at the hospital last August, was made by Kromek – a British company. Kromek is one of just a few firms in the world that can make CZT. You may never have heard of the stuff but, in Dr Wechalekar's words, it is enabling a "revolution" in medical imaging.

This wonder material has many other uses, such as in X-ray telescopes, radiation detectors and airport security scanners. And it is increasingly sought-after.

Investigations of patients' lungs performed by Dr Wechalekar and her colleagues involve looking for the presence of many tiny blood clots in people with long Covid, or a larger clot known as a pulmonary embolism, for example.

The £1m scanner works by detecting gamma rays emitted by a radioactive substance that is injected into patients' bodies.

But the scanner's sensitivity means less of this substance is needed than before: "We can reduce doses about 30%," says Dr Wechalekar. While CZT-based scanners are not new in general, large, whole-body scanners such as this one are a relatively recent innovation.

Wearing a white jacket, Dr Kshama Wechalekar stands alongside a hospital scanner (Guy's and St Thomas' NHS Foundation Trust)

Dr Kshama Wechalekar with the latest scanner at London's Royal Brompton Hospital

CZT itself has been around for decades but it is notoriously difficult to manufacture. "It has taken a long time for it to develop into an industrial-scale production process," says Arnab Basu, founding chief executive of Kromek.

In the company's facility at Sedgefield, there are 170 small furnaces in a room that Dr Basu describes as looking "like a server farm".

A special powder is heated up in these furnaces, turned molten, and then solidified into a single-crystal structure. The whole process takes weeks. "Atom by atom, the crystals are rearranged […] so they become all aligned," says Dr Basu.

The newly formed CZT, a semiconductor, can detect tiny photon particles in X-rays and gamma rays with incredible precision – like a highly specialised version of the light-sensing, silicon-based image sensor in your smartphone camera.

Whenever a high energy photon strikes the CZT, it mobilises an electron and this electrical signal can be used to make an image. Earlier scanner technology used a two-step process, which was not as precise.

"It's digital," says Dr Basu. "It's a single conversion step. It retains all the important information such as timing, the energy of the X-ray that is hitting the CZT detector – you can create colour, or spectroscopic images."

He adds that CZT-based scanners are currently in use for explosives detection at UK airports, and for scanning checked baggage in some US airports. "We expect CZT to come into the hand luggage segment over the next [few] years."

A technician wearing blue gloves adjusts one of a line of furnaces (image: Kromek)

Special furnaces are needed to make CZT

But it's not always easy to get your hands on CZT.

Henric Krawczynski at Washington University in St Louis in the US has used the material before on space telescopes attached to high altitude balloons. These detectors can pick up X-rays emitted by both neutron stars and plasma around black holes.

Prof Krawczynski wants very thin, 0.8mm pieces of CZT for his telescopes because this helps to reduce the amount of background radiation they pick up, allowing for a clearer signal. "We'd like to buy 17 new detectors," he says. "It's really difficult to get these thin ones."

He was unable to source the CZT from Kromek. Dr Basu says his firm is facing high demand at the moment. "We support many, many research organisations," he adds. "It's very difficult for us to do a hundred different things. Each research [project] needs a very particular type of detector structure."

For Prof Krawczynski, it's not a crisis – he says he might use either CZT that he has from previous research, or cadmium telluride, an alternative, for his next mission.

However, there are bigger headaches at the moment. That upcoming mission was due to fly from Antarctica in December but "all the dates are in flux", says Prof Krawczynski, because of the US government shutdown .

A technician adjusts equipment at Diamond Light Source (image: Diamond Light Source)

CZT will be used in an upgrade of Diamond Light Source

Many other scientists use CZT. In the UK, a major upgrade of the Diamond Light Source research facility in Oxfordshire – costing half a billion pounds – will improve its capabilities thanks to the installation of CZT-based detectors.

Diamond Light Source is a synchrotron, which fires electrons around a giant ring at nearly the speed of light. Magnets cause these whizzing electrons to lose some energy in the form of X-rays, and these are directed off from the ring in beamlines so that they may be used to analyse materials, for example.

Some recent experiments have involved probing impurities in aluminium while it melts. Understanding those impurities better could help improve recycled forms of the metal.

With Diamond Light Source's upgrade, due to complete in 2030, the X-rays produced will be significantly brighter, meaning that existing sensors would not be able to detect them properly.

"There's no point in spending all this money in upgrading these facilities if you can't detect the light they produce," says Matt Veale, group leader for detector development at the Science and Technology Facilities Council, which is the majority owner of Diamond Light Source.

That's why, here too, CZT is the material of choice.

The Boot Order of the Raspberry Pi Is Unusual

Hacker News
patrickmccanna.net
2025-12-12 01:28:43
Comments...
Original Article

I discovered that the Raspberry Pi doesn't boot the same way traditional PCs do. This was interesting and I thought I'd share.

At a high level, Raspberry Pi booting is firmware-driven , not BIOS-driven like a PC. On Raspberry Pi, the GPU (VideoCore) is powered first and is the root of trust for booting. The ARM CPU is not the initial execution environment. This is a deliberate architectural choice dating back to the original Pi.

Boot sequence (simplified):

1. Power applied

  • Power management IC brings up rails
  • VideoCore GPU comes up first
  • ARM CPU is held in reset

2. VideoCore ROM Executes (GPU Side)

  • Immutable GPU boot ROM runs
  • This code:
    • Initializes minimal SDRAM
    • Reads boot configuration
    • Locates next-stage bootloader

The ARM cores are still powered down.

3. GPU Loads Firmware

  • GPU reads EEPROM bootloader
  • EEPROM bootloader then loads firmware from SD / USB / Network

The loaded firmware files are GPU binaries, not ARM code!

  • start*.elf
  • fixup*.dat

4. GPU Configures the System

The GPU:

  • Parses config.txt
  • Applies device tree overlays
  • Allocates memory split (GPU vs ARM)
  • Initializes clocks and peripherals
  • Loads the ARM kernel image into RAM

At this point, the system hardware layout is defined by the GPU , not the CPU.
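
For illustration, here is roughly what a few config.txt entries the GPU firmware acts on can look like. The option names (gpu_mem, dtoverlay, kernel, arm_64bit) are standard Raspberry Pi firmware settings, but the values below are only examples:

  # How much RAM the GPU keeps (the rest goes to the ARM side)
  gpu_mem=128

  # A device tree overlay applied by the firmware before Linux starts
  dtoverlay=i2c-rtc,ds3231

  # Which kernel image the firmware stages for the ARM CPU
  kernel=kernel8.img
  arm_64bit=1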

5. GPU Releases the ARM CPU from Reset

Only after:

  • Firmware is loaded
  • Memory is mapped
  • Kernel is staged

…does the GPU release the ARM core(s) and set their entry point.

This is when the CPU first executes instructions .

6. ARM CPU Starts Linux

  • CPU jumps directly into:
    • kernel7.img / kernel8.img
  • Linux takes over
  • GPU becomes a peripheral (mailbox, display, VPU, etc.)
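
Once Linux is running, you can still talk to the VideoCore over this mailbox interface. On Raspberry Pi OS, for example, the vcgencmd utility wraps it (the output values shown here are illustrative):

  vcgencmd version        # firmware build the GPU is running
  vcgencmd get_mem arm    # RAM assigned to the ARM side, e.g. "arm=948M"
  vcgencmd get_mem gpu    # RAM kept by the GPU, e.g. "gpu=76M"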

This explains several Raspberry Pi oddities:

  • The Raspberry Pi has No BIOS / UEFI
  • The config.txt is not a Linux File
  • Kernel Replacement Is Trivial
  • Boot failures before Linux is loaded are invisible to Linux

Even with the EEPROM bootloader:

  • The GPU still executes first
  • The EEPROM code is executed by the GPU
  • ARM remains gated until kernel handoff

EEPROM just replaces bootcode.bin; it does not change authority.
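
On recent models running Raspberry Pi OS, you can inspect this GPU-executed EEPROM stage with the stock tooling (availability varies by model):

  vcgencmd bootloader_version   # which EEPROM bootloader the GPU ran at power-on
  rpi-eeprom-config             # dump the bootloader's own configuration
  rpi-eeprom-update             # compare the installed EEPROM image against the latest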

The trust chain for the Pi is:

GPU ROM → GPU firmware → ARM kernel → Linux userspace

The trust chain choices have consequences!

  • ARM cannot verify GPU firmware
  • Secure boot (where enabled) is GPU-anchored
  • This is why Raspberry Pi secure boot is not comparable to PC Secure Boot

The Raspberry Pi Secure boot implementation ensures that:

  • Only cryptographically signed boot firmware and kernel images are executed
  • The chain of trust starts in the VideoCore GPU , not the ARM CPU
  • The system can be locked to a specific vendor or deployment

It does not :

  • Provide a hardware-enforced user/kernel trust boundary
  • Protect against a malicious or compromised GPU firmware
  • Provide measured boot or TPM-style attestation
  • Prevent runtime compromise of Linux

Here’s the order of operations for boot up on a traditional PC:

Traditional PC Boot:

  ┌─────────────┐
  │    BIOS     │
  │   (CPU)     │
  └──────┬──────┘
         ↓
  ┌─────────────┐
  │  Bootloader │
  │   (CPU)     │
  └──────┬──────┘
         ↓
  ┌─────────────┐
  │   Kernel    │
  │   (CPU)     │
  └─────────────┘

The firmware embedded in the motherboard powers up the CPU. The CPU loads the bootloader. The bootloader then, ideally, performs the cryptographic checks needed to load an unmodified kernel. From there, the boot process continues with init/systemd, and our services are brought online for a running system.

The Pi is totally different. Instead of starting with the CPU, we start with the GPU.

┌─────────────┐
│  VideoCore  │ ← GPU boots FIRST
│    (GPU)    │
└──────┬──────┘
       ↓
┌─────────────┐
│ Loads ARM   │
│   kernel    │
└──────┬──────┘
       ↓
┌─────────────┐
│  ARM CPU    │ ← CPU starts LAST
│   wakes up  │
└─────────────┘

Why? The Raspberry Pi uses Broadcom BCM2xxx chips, in which the "main" processor, a VideoCore IV/VI GPU, is the one activated at power-on. It runs proprietary firmware that handles the boot. The BCM2xxx chips are typically used in set-top boxes for video streaming and entertainment, where the goal is to get to a flashy user interface quickly. The Raspberry Pi Foundation chose these inexpensive chips as their base, and that choice left them with an odd boot order.

Google De-Indexed My Bear Blog and I Don't Know Why

Hacker News
journal.james-zhan.com
2025-12-12 01:20:05
Comments...
Original Article

Preamble: The whole affair is Google’s fault and not Bear Blog’s. Huge thanks to Herman —Bear Blog’s founder and dev—for his patience and help.


A month after I started my first Bear blog at blog.james-zhan.com , my blog was entirely de-indexed by Google for no apparent reason:

Screenshot of Google search

I have since migrated to journal.james-zhan.com (you are on it right now) and redirected all links from blog.james-zhan.com accordingly, but to this day, I don’t understand what happened, and so I’m putting this post out there to see if perhaps anyone could shed some light—you are welcome to email me or leave a comment at the bottom of the post.

Let me backtrack and show you how it all went down.


Table of Contents

At first, all was well
Where it started to go wrong
Was that a coincidence or the cause of the issue?
It got worse
Extensive troubleshooting
Suspect #1: Something’s up with the domain
Suspect #2: Quality of blog content
Suspect #3: Lack of internal linking
Other suspects, eliminated with Herman’s help (thank you Herman!)
My blog was properly indexed by other search engines
What I ended up doing


At first, all was well

My blog went live on Oct 4 and I published a lengthy, well-researched opinion piece commenting on a recent event.

Because of that, I wanted the article to show up on Google ASAP so that when people searched about the event, maybe my article would come up. I knew that it could take a while for Google to naturally crawl and index a new site, so to accelerate the process, I went on Google Search Console (GSC), submitted the sitemap and requested indexing on my article.

And it worked—the next day, my blog and articles were indexed and showed up on Google if you put in the right search terms.

In GSC, you can even see that I was getting some impressions and clicks at the time from the exact topic that my opinion piece was about. Great!

GSC analytics

From then on, every time I published a new post, I would go to GSC and request indexing for the post URL, and my post would be on Google search results shortly after, as expected.

Where it started to go wrong

On Oct 14, as I was digging around GSC, I noticed that it was telling me that one of the URLs wasn't indexed. I thought that was weird, and not being very familiar with GSC, I went ahead and clicked the "Validate" button.

Only afterwards did I realize that the URL was the RSS feed subscribe link, https://blog.james-zhan.com/feed/?type=rss, which isn't even a page, so it made sense that it hadn't been indexed. But by then it was too late, and there was no way for me to stop the validation.

I received an email from GSC telling me it was validating that 1 page with indexing issues:

Email from GSC

Four days later, on Oct 20, I received an email from GSC saying “Some fixes failed for Page indexing issues on site https://blog.james-zhan.com/” and when I searched “site:blog.james-zhan.com,” I saw that all but one of my blog posts had been de-indexed:

GSC

All of them showed the same reason:

“Page is not indexed: Crawled – currently not indexed”

GSC

Confused, I poked around GSC to see if it showed me why, and I couldn’t find anything useful, so I resubmitted the sitemap for good measure, and clicked “Validate” again.

I even requested indexing for all the individual blog post URLs and that didn’t do anything.

As of the publishing of this post, the validation status is still “Started” (it’s been nearly 20 days).

Was that a coincidence or the cause of the issue?

As I was troubleshooting, I noticed that the day that I initiated the validation for the first time (Oct 14) was the same day that all but one of my blog posts got de-indexed:

GSC

Did my accidental attempt to make GSC index https://blog.james-zhan.com/feed/?type=rss cause some kind of glitch, thereby de-indexing the rest of the blog?

I don’t get why it would, but it’s weird that the two events happened on the same day.

It got worse

While this was going on, I continued to post a few articles, and you can see that all the new posts faced the “Page is not indexed: Crawled – currently not indexed” error:

GSC

And then on Nov 3, I discovered that the remaining, single blog post that had been indexed just got de-indexed as well:

GSC

So basically, no one could find my blog on Google.

Extensive troubleshooting

I’m not a web dev or programmer, but I tried my best to cover as much ground as possible in my troubleshooting to narrow down the cause.

Suspect #1: Something’s up with the domain

The root domain, james-zhan.com , was from GoDaddy. I’ve had this domain for many years and I’ve used it on different sites and never had an issue with Google’s indexing.

For example, just this year, I created a new subdomain with it and that’s been indexed by Google.

I also don’t touch any advanced configuration with DNS records or what have you—I don’t have knowledge in that stuff, so it’s unlikely I somehow screwed up something in GoDaddy.

But just to be sure it wasn’t some wonky thing going on specifically with the Bear blog + GoDaddy combo, I created another Bear blog with the subdomain www.james-zhan.com .

This one shows up on Google no problem.

Conclusion: Domain wasn’t the cause.

Suspect #2: Quality of blog content

Whenever people discuss the indexing of their website in online forums, they always talk about the quality of the content being a huge factor. They say that your site isn’t indexed or isn’t ranked highly because your site doesn’t have much content, your content is low effort, or something like that.

First, I’m not worried about ranking—I just want my blog to be properly indexed.

Second, the issue couldn’t be the quality or the quantity of the content. I came across some other pretty barebones Bear blogs that don’t have much content, and looked them up on Google, and they showed up in the results just fine.

An example: Phong’s blog . It’s a very minimalist blog with only 6 posts (of great quality) and it shows up on Google search.

Conclusion: Quality or quantity of content wasn’t the cause.

Suspect #3: Lack of internal linking

I read about how the structure of a site can play a role in Google’s indexing.

Some say that if your blog posts’ URLs are all “orphaned,” like:

  • domain.com/post-title-1
  • domain.com/post-title-2

…instead of:

  • domain.com/blog/post-title-1
  • domain.com/blog/post-title-2

…then, allegedly, Google might not index your posts. By default, when you publish a post on Bear Blog, the blog post's path isn't preceded by "blog/."

So I went around and checked the post URLs of other Bear blogs and saw that none of them had “/blog/” in them, and those blogs were indexed just fine. I also highly doubt it’s a real issue; otherwise, it wouldn’t be the default behaviour on Bear Blog.

Conclusion: Lack of internal linking wasn’t the cause.

Other suspects, eliminated with Herman’s help (thank you Herman!)

I reached out to Herman with all the details and asked him for help. Of course, he responded promptly and helped me troubleshoot to identify the cause.

He was able to confirm the following:

  • GoDaddy and DNS weren’t the cause
  • My bear blog had nothing that would prevent Google from indexing
  • HTML/CSS doesn’t affect SEO/indexing
    • I had the following CSS code to put the tags above the blog post title, but Herman said this was fine
    /* --- Move tags above the title --- */
    main {
        display: flex;
        flex-direction: column;
    }

    /* Style and reposition the tags */
    main > p.tags {
        order: -1;                /* Moves tags above the title */
        margin: 0 0 0.6rem 0;
        font-size: 0.9em;
        letter-spacing: 0.02em;
        color: var(--heading-color);
        opacity: 0.8;
    }

    /* Keep the title below tags */
    main > h1 {
        order: 0;
    }

I just wanted to take a moment to express my gratitude to Herman for investigating this with me. My emails to him were pretty elaborate with troubleshooting steps I had taken along with many screenshots. He took the time to fully understand the whole issue and even triple-checked my site to make sure everything was sound.

It was a refreshing tech support experience, and made me love Bear Blog as a platform just that much more.

My blog was properly indexed by other search engines

I don’t even have to use “site:”—just by searching “James Zhan blog,” both my blog and my www.james-zhan.com site show up in other search engines:

DuckDuckGo:

DuckDuckGo results

Bing:

Bing results

Brave:

Brave results

So there’s definitely nothing wrong on a technical level with my blog that would prevent Google from indexing it.

What I ended up doing

I copied my blog over to a different subdomain (you are on it right now), moved my domain from GoDaddy to Porkbun for URL forwarding, and set up URL forwarding with paths so any blog post URLs I posted online will automatically be redirected to the corresponding blog post on this new blog.

I also avoided submitting the sitemap of the new blog to GSC. I’m just gonna let Google naturally index the blog this time. Hopefully, this new blog won’t run into the same issue.

At this point, I’m no longer trying to resolve the issue, but just out of curiosity, I do want to know what the hell happened there. I’d had a previous site on GSC to track traffic for many years and never had such an issue.

If any of you have any guesses, I’d love to hear them ( email me or leave a comment below)!

Previous | Next


Subscribe to my blog via email or RSS feed .


#rabbit-holes

CRISPR fungus: Protein-packed, sustainable, and tastes like meat

Hacker News
www.isaaa.org
2025-12-12 00:59:46
Comments...
Original Article
Timed out getting readerview for https://www.isaaa.org/kc/cropbiotechupdate/article/default.asp?ID=21607

The Mostest Anarchist

Portside
portside.org
2025-12-12 00:46:09
The Mostest Anarchist jay Thu, 12/11/2025 - 19:46 ...
Original Article

These days, and once again, “anarchism” has become an official swear word. For Trump’s Justice (or injustice) Department, every anarchist appears to be a violent member of Antifa, remarkably so because “Antifa” does not actually exist as an organization or even a fixed set of ideas.

A little history should help. Scholars date the rise of anarchism to the early nineteenth century and the British thinker William Godwin, then to the famed utopians like Proudhon. Only with the middle of that century does something clearly and (somewhat) strategically emerge as “anarchism”— simultaneously with something that will compete with Marxism but also serve as a fellow-actor in the global struggle to overcome capitalism.

Johann Most, Life of a Radical
By Tom Goyens
University of Illinois Press; 296 pages
December 9, 2025
Paperback:  $29.95;  E-book:  $14.95
ISBN: 978-0-252-08903-9  and  978-0-252-04847-0

University of Illinois Press

Even today, after a mountain of scholarly literature in many languages has found its audience, few seem to grasp that until the 1890s at the earliest, these two, mostly competing ideologies had about the same number of sympathizers and activists. Not only did the followers of Bakunin and Marx struggle with each other, but hundreds of thousands of local activists shifted back and forth in ideas and tactics across the US, Europe and far beyond.

Thus Marx's lieutenant in New York, the music teacher Friedrich Sorge, famously purged an actual majority of the US wing of the First International in 1871, including nearly all of its women members—denied employment yet still deemed guilty of not being wage earners. They were also suspiciously anarchistic. The mass movement around the struggle for the Eight Hour Day a decade later, in Chicago (known then as "Little Paris"), was nevertheless led by social revolutionaries, i.e., anarchists, with a vigorous press and organized social life behind their politics.

Repression had a terrible effect on anarchists, and not only in the US. Nevertheless, until the 1890s at the earliest, anarchists competed freely with socialists, more popular in some places, less in others. After 1900, they held attention in syndicalist-minded labor movements including the IWW, offering the public talented educators and agitators (advocating free love and abortion, among other causes), as well as experimental education and cooperative community experiments. Among a few groups, Mexican-Americans included, their influence remained strong. In general, 1920 marks a downward turning point. As oldtimers told me almost sixty years ago, in letters to the office of the magazine Radical America, the world before 1920 had seemed more open as well as more cheerful. After that, the Russian Revolution offered solace as much as hope. Somewhere, at least, capitalism had been conquered.

Here we come to the magnetic, wild-eyed Johann Most, the very avatar of anarchism. Tom Goyens’ biography is the best writing ever on Most, by a long stretch, and is likely to remain so. The real Most emerges as a highly cultured German artisan by training, a talented and prolific writer and energetic newspaper editor with a powerfully caustic sense of humor. Most of all, however, he was an orator. A bit like Luigi Galleani, the Italian-American anarchist said to set off riots by the sound of his voice, Most stirred the blood of listeners, sometimes to the boiling point.

Most believed through much of his life that he should have had a great destiny as an actor, cut off by a scar caused by an infected jaw, in his youth, treated with surgery that saved his life but also shifted the bones in his face. The effect of this personal tragedy upon his volatile temper has always invited speculation.

Born out of wedlock in 1846 to a would-be musician/actor and the daughter of a military officer, Most showed early promise in school—until expelled, after leading a strike against his French teacher in Augsburg, Germany. As an apprentice and then artisan bookbinder, he traveled widely across Europe. The First International formed in 1864, and as a corresponding secretary for an educational association in Switzerland, he embraced socialistic ideas. By 1871, he had already become a promising and locally admired radical leader. He also experienced the first wave of repression that remained his lot in life.

He could address crowds of thousands in Vienna, entertaining them with jokes and banter along with messages of hatred for the rich. Critics would say that he invited persecution. Indeed, when police in Vienna were given orders to arrest dissidents, Most refused to go into hiding. On the witness stand in 1870, he defended himself with inflammatory language and was convicted of High Treason. Expelled from Austria, he became editor of a socialist newspaper in Chemnitz, Germany. By 1872, under arrest again—falsely convicted for leading a strike that he regarded privately as doomed—he had become a public personality across much of the German-speaking world.

Already, the same year, he published a selection of fifty proletarian songs, some of them his own, at the historical moment when socialist rallies so relied upon camaraderie that prospective attendees were urged to bring along their songbooks! Here, the time and place become especially alive in the biography, inviting us to mull the radical cultures of their time and ours.

It has never been quite accurate to categorize Most as a rigid anarchist, even as he drifted toward that camp around Bakunin rather than Marx. Then and throughout his subsequent activities, he set his task as awakening working people to their proper destiny. Like other German socialists, he looked back upon a (perhaps) egalitarian ancient world, viewing the rise of the State and a propertied ruling class as the antecedent to real progress. Even by the middle 1870s, he proposed a peaceful transformation, first of working class culture itself, and then over property/class conditions, as peacefully as possible.

In 1878, a failed attempt by a lowly plumber to assassinate Emperor Wilhelm I prompted what became known as the “Anti-Socialist Law.” Most lost his editor’s job, his pamphlets were confiscated and a court sentenced him to six months in prison. He fled to London in the last month of that year. There, cafe life glowed with the fire of political exiles from across Europe. It did not help that the German Social Democratic Party formally expelled him, in his absence, in an attempt to make themselves more respectable.

When the Red Scare spread to Britain, he found himself behind bars again and his exile-newspaper, Freiheit , founded in 1879, could no longer be smuggled into Germany. Just a year after he left Germany, he left Europe altogether, for the US. There, he found himself in the fresh excitement of a rising labor movement and a “social revolutionary” following in various cities, mostly but not entirely German-American in composition. His speeches held crowds and his pamphlets circulated widely. He moved Freiheit, more and more an organ of his personal and notably literary expression, across the Atlantic.

For the rest of his life, he struggled to escape the political straight-jacket that the contemporary press put upon him: Most as the apostle of terror. A campaign of assassinations and robberies in Austria and Germany, widely credited to the followers and sometime associates of Most, brought jail sentences and even executions. In the US, and despite persecutions, only the Haymarket Martyrs of Chicago were put to death—on non-existent evidence.

Socialists in the US, seeking to relaunch their movement after several false starts, had attempted to distance themselves from Most and his ideas. The contrast and the hard feelings would remain. More to the point, Most himself or rather his image in newspaper cartoons and drawings, had already come to represent anarchism-in-the-flesh. The police repression in 1886 spread from Chicago across the US, and not only where German-Americans could be found. The same year saw the apex of the Knights of Labor, sweeping through textile villages and factory towns, especially but not only among Irish-Americans. Notwithstanding the peaceful content of the strike wave, meetings were broken up and strikes busted. All of this seemed to push socialists further away from Most.

Meanwhile, Most himself seems to have changed the nature of his anarchist beliefs. By the late 1880s, he felt himself closer to the violence-shunning philosophy of Peter Kropotkin and Elisee Reclus. He translated the published pamphlets by the Prince, urging social transformation on a cultural, almost spiritual basis.

He still might have been isolated. Providentially, the 1890s saw the influx of Italian and Jewish immigrants to the US, groups bringing with them their own versions of anarchism. Many Jewish immigrants could read German and catch up with Freiheit , which appeared intermittently. Unlike German-American anarchists, the new Jewish anarchist arrivals flocked to fledgling unions.

Most, meanwhile, found a new companion and, decades after an unsuccessful first marriage in Germany, an eventual second wife, Helene Minkin, who worked in a corset factory but would eventually take over a lot of the production and editorial work of his paper. Emma Goldman, another new acquaintance, became for a while his lover and his student. Each of the women was a generation younger, but Goldman had other lovers while Minkin would be wholly devoted to Most.

Defending himself over an incendiary address in a beer hall in 1889, he became a major figure in the fight for freedom of speech, attracting wide press attention and, on his release in 1892, a thunderous welcome at Cooper Union. He also made a further turn toward education, more firmly renouncing the individual violence he had been leaning away from for several years. Emma Goldman, enraged at his disavowal of Alexander Berkman's attempt to assassinate steel executive Henry Clay Frick, leaped onto a stage where Most was speaking and struck him with a horsewhip!

He had already signalled his eagerness to reach an English-speaking audience, but found himself now more interested in the stage, where, with a full beard, he could hide his scar. In either language, he could dramatize the struggles of the poor against the rich. His leading performance in Hauptmann's "Die Weber" (The Weavers) met with audience enthusiasm. He also resumed his lecture appearances, in German and English, to eager audiences punished by the economic crisis of the 1890s.

The assassination of President McKinley in 1901 inevitably brought a fresh wave of repression, and with it the commercial press's renewed association of anarchism with violence. He went back on trial, this time for the crime of merely publishing Freiheit, and once again faced conviction, with a sentence of a year's imprisonment. Released from prison at age 57, weakened by age and political frustrations, he resolved to write his memoirs.

Some described Most, at this age, as broken. Yet he set out on one lecture tour after another. Too old to become part of the Industrial Workers of the World at its founding convention in 1905, he had become an admired senior revolutionary. He died on the road in March 1906, leaving behind Minkin and their two sons. At his funeral in Cincinnati, an anarchist choir sang, and the head of the local brewers' union praised him as a true friend of the working class. Even some of the socialist notables who had attacked him took the opportunity to pay their respects at the very large commemorative event held for him at the Grand Central Palace on 43rd Street, New York.

Most had successfully published three volumes of his memoirs. His widow, working as a midwife in the Bronx, raised the children, wrote her own memoir in the Jewish Forverts, and lived until 1954. I can recall hearing a play-by-play radio broadcast of the Boston Celtics by "the famous Johnny Most" sometime during the 1980s. Like a small number of other listeners, I was sufficiently interested to look up the connection. Johnny, like his father, had a fine speaking voice.

[ Paul Buhle , in youth a syndicalist and later a student syndicalist, has a nagging sympathy for the peaceful wing of anarchism.]

Stoolap: High-performance embedded SQL database in pure Rust

Hacker News
github.com
2025-12-12 00:28:24
Comments...
Original Article

Overview

Stoolap is an embedded SQL database with MVCC transactions, written entirely in Rust. It supports both in-memory and persistent storage modes with full ACID compliance.

Installation

# Add to Cargo.toml
[dependencies]
stoolap = "0.1"

Or build from source:

git clone https://github.com/stoolap/stoolap.git
cd stoolap
cargo build --release

Quick Start

As a Library

use stoolap::Database;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let db = Database::open_in_memory()?;

    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)", ())?;
    db.execute("INSERT INTO users VALUES (1, 'Alice')", ())?;

    for row in db.query("SELECT * FROM users", ())? {
        let row = row?;
        println!("{}: {}", row.get::<i64>(0)?, row.get::<String>(1)?);
    }

    Ok(())
}

Command Line

./stoolap                                    # In-memory REPL
./stoolap --db "file:///path/to/data"        # Persistent database
./stoolap -q "SELECT 1 + 1"                  # Execute query directly

Features

MVCC Transactions

Full multi-version concurrency control with two isolation levels:

-- Read Committed (default)
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;

-- Snapshot Isolation
BEGIN TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT * FROM accounts;  -- Consistent view throughout transaction
COMMIT;

Time-Travel Queries

Query historical data at any point in time:

-- Query data as it existed at a specific timestamp
SELECT * FROM orders AS OF TIMESTAMP '2024-01-15 10:30:00';

-- Query data as of a specific transaction
SELECT * FROM inventory AS OF TRANSACTION 1234;

-- Compare current vs historical data
SELECT
    current.price,
    historical.price AS old_price
FROM products current
JOIN products AS OF TIMESTAMP '2024-01-01' historical
    ON current.id = historical.id
WHERE current.price != historical.price;

Index Types

Stoolap automatically selects optimal index types, or you can specify explicitly:

-- B-tree: Range queries, sorting, prefix matching
CREATE INDEX idx_date ON orders(created_at) USING BTREE;
SELECT * FROM orders WHERE created_at BETWEEN '2024-01-01' AND '2024-12-31';

-- Hash: O(1) equality lookups
CREATE INDEX idx_email ON users(email) USING HASH;
SELECT * FROM users WHERE email = 'alice@example.com';

-- Bitmap: Low-cardinality columns, efficient AND/OR
CREATE INDEX idx_status ON orders(status) USING BITMAP;
SELECT * FROM orders WHERE status = 'pending' AND priority = 'high';

-- Multi-column composite indexes
CREATE INDEX idx_lookup ON events(user_id, event_type, created_at);
SELECT * FROM events WHERE user_id = 100 AND event_type = 'click';

Window Functions

Full support for analytical queries:

SELECT
    employee_name,
    department,
    salary,
    ROW_NUMBER() OVER (PARTITION BY department ORDER BY salary DESC) as rank,
    salary - LAG(salary) OVER (ORDER BY hire_date) as salary_change,
    AVG(salary) OVER (PARTITION BY department) as dept_avg,
    SUM(salary) OVER (ORDER BY hire_date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) as running_total
FROM employees;

Common Table Expressions

Including recursive queries:

-- Non-recursive CTE
WITH high_value_orders AS (
    SELECT * FROM orders WHERE amount > 1000
)
SELECT customer_id, COUNT(*) FROM high_value_orders GROUP BY customer_id;

-- Recursive CTE (e.g., organizational hierarchy)
WITH RECURSIVE org_chart AS (
    SELECT id, name, manager_id, 1 as level
    FROM employees WHERE manager_id IS NULL

    UNION ALL

    SELECT e.id, e.name, e.manager_id, oc.level + 1
    FROM employees e
    JOIN org_chart oc ON e.manager_id = oc.id
)
SELECT * FROM org_chart ORDER BY level, name;

Advanced Aggregations

-- ROLLUP: Hierarchical subtotals
SELECT region, product, SUM(sales)
FROM sales_data
GROUP BY ROLLUP(region, product);

-- CUBE: All possible subtotal combinations
SELECT region, product, SUM(sales)
FROM sales_data
GROUP BY CUBE(region, product);

-- GROUPING SETS: Specific grouping combinations
SELECT region, product, category, SUM(sales)
FROM sales_data
GROUP BY GROUPING SETS ((region, product), (category), ());

Subqueries

Scalar, correlated, EXISTS, and IN subqueries:

-- Correlated subquery
SELECT * FROM employees e
WHERE salary > (SELECT AVG(salary) FROM employees WHERE department = e.department);

-- EXISTS
SELECT * FROM customers c
WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id AND o.amount > 1000);

-- IN with subquery
SELECT * FROM products
WHERE category_id IN (SELECT id FROM categories WHERE active = true);

Query Optimizer

Cost-based optimizer with statistics:

-- Collect table statistics
ANALYZE orders;

-- View query execution plan
EXPLAIN SELECT * FROM orders WHERE customer_id = 100;

-- View plan with actual execution statistics
EXPLAIN ANALYZE SELECT * FROM orders o
JOIN customers c ON o.customer_id = c.id
WHERE c.country = 'US';

Data Types

Type        Description              Example
INTEGER     64-bit signed integer    42, -100
FLOAT       64-bit floating point    3.14, -0.001
TEXT        UTF-8 string             'hello', '日本語'
BOOLEAN     True/false               TRUE, FALSE
TIMESTAMP   Date and time            '2024-01-15 10:30:00'
DATE        Date only                '2024-01-15'
TIME        Time only                '10:30:00'
JSON        JSON data                '{"key": "value"}'
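
Putting a few of these types together, a small illustrative table (the table and column names are hypothetical, and exact literal syntax for timestamps and JSON values may vary):

CREATE TABLE events (
    id INTEGER PRIMARY KEY,
    name TEXT,
    is_public BOOLEAN,
    starts_at TIMESTAMP,
    metadata JSON
);

INSERT INTO events VALUES (1, 'Launch', TRUE, '2024-01-15 10:30:00', '{"venue": "online"}');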

Built-in Functions

String Functions

UPPER , LOWER , LENGTH , TRIM , LTRIM , RTRIM , CONCAT , SUBSTRING , REPLACE , REVERSE , LEFT , RIGHT , LPAD , RPAD , REPEAT , POSITION , LOCATE , INSTR , SPLIT_PART , INITCAP , ASCII , CHR , TRANSLATE

Math Functions

ABS , CEIL , FLOOR , ROUND , TRUNC , SQRT , POWER , MOD , SIGN , GREATEST , LEAST , EXP , LN , LOG , LOG10 , LOG2 , SIN , COS , TAN , ASIN , ACOS , ATAN , ATAN2 , DEGREES , RADIANS , PI , RAND , RANDOM

Date/Time Functions

NOW , CURRENT_DATE , CURRENT_TIME , CURRENT_TIMESTAMP , EXTRACT , DATE_TRUNC , DATE_ADD , DATE_SUB , DATEDIFF , YEAR , MONTH , DAY , HOUR , MINUTE , SECOND , DAYOFWEEK , DAYOFYEAR , WEEK , QUARTER , TO_CHAR , TO_DATE , TO_TIMESTAMP

JSON Functions

JSON_EXTRACT , JSON_EXTRACT_PATH , JSON_TYPE , JSON_TYPEOF , JSON_VALID , JSON_KEYS , JSON_ARRAY_LENGTH

Aggregate Functions

COUNT , SUM , AVG , MIN , MAX , STDDEV , STDDEV_POP , STDDEV_SAMP , VARIANCE , VAR_POP , VAR_SAMP , STRING_AGG , ARRAY_AGG , FIRST , LAST , BIT_AND , BIT_OR , BIT_XOR , BOOL_AND , BOOL_OR

Window Functions

ROW_NUMBER , RANK , DENSE_RANK , NTILE , LAG , LEAD , FIRST_VALUE , LAST_VALUE , NTH_VALUE , PERCENT_RANK , CUME_DIST

Other Functions

COALESCE , NULLIF , CAST , CASE , IF , IIF , NVL , NVL2 , DECODE , GREATEST , LEAST , GENERATE_SERIES
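
As a quick illustration combining a few of the functions listed above (table and column names are hypothetical, and exact argument forms may differ slightly):

SELECT
    UPPER(name)                    AS name_upper,
    COALESCE(discount, 0)          AS discount,
    ROUND(price, 2)                AS price_rounded,
    EXTRACT(YEAR FROM created_at)  AS created_year
FROM products
WHERE LENGTH(name) > 3;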

Persistence

Stoolap uses write-ahead logging (WAL) with periodic snapshots:

# In-memory (default) - data lost on exit
./stoolap --db "memory://"

# File-based - durable storage
./stoolap --db "file:///var/lib/stoolap/data"

Features:

  • WAL : All changes logged before applied, survives crashes
  • Snapshots : Periodic full database snapshots for faster recovery
  • Index persistence : All indexes saved and restored

Architecture

src/
├── api/        # Public API (Database, Connection, Rows)
├── core/       # Types (Value, Row, Schema, Error)
├── parser/     # SQL lexer and parser
├── planner/    # Query planning
├── optimizer/  # Cost-based query optimizer
├── executor/   # Query execution engine
├── functions/  # 100+ built-in functions
│   ├── scalar/     # String, math, date, JSON
│   ├── aggregate/  # COUNT, SUM, AVG, etc.
│   └── window/     # ROW_NUMBER, RANK, LAG, etc.
└── storage/    # Storage engine
    ├── mvcc/       # Multi-version concurrency control
    └── index/      # B-tree, Hash, Bitmap indexes

Building

cargo build              # Debug build
cargo build --release    # Release build (optimized)
cargo test               # Run tests
cargo clippy             # Lint
cargo doc --open         # Generate documentation

Contributing

See CONTRIBUTING.md for guidelines.

License

Apache License 2.0. See LICENSE .

Notes on Gamma

Lobsters
poniesandlight.co.uk
2025-12-12 00:27:22
Comments...
Original Article

Gamma is a blight, a curse, and utterly annoying. Ever since somebody told me that RGB colours need to be Gamma corrected, RGB colours were spoilt for me. Gamma does to digital colour what kerning does to typography.

This post is an attempt to get even with Gamma.

Lost Innocence

Because, you see, only once I became fully aware of Gamma, things really started to fall apart. In my pre-gamma-aware innocence, I must have done some things right.

Let me show what I mean:

Here, I generate a linear gradient using a GLSL fragment shader. Say, drawing a full-screen quad using this shader code snippet:

vec3 color = vec3(uv.x);       // increase brightness linearly with x-axis
outFragColor = vec4(color, 1); // output to RGB swapchain image

And voila - a perceptually linear gradient.

 A perceptually linear gradient
A perceptually linear gradient, innocently created

But now, let's get clever about this: We notice that we're actually drawing to an sRGB monitor (most desktop monitors are, nowadays), so we should probably use an sRGB image for the swapchain (these are the most common 8bpc (read: "bits per channel") swapchain image formats in Vulkan). Thus we somehow need to convert our RGB colours to sRGB. The most elegant way to do this is to use an image attachment with an sRGB format, which (at least in Vulkan) does the conversion automatically and precisely on texel write.

If we draw the same shader as before, but now into an sRGB swapchain, the swapchain image's non-linear sRGB encoding should correct for the sRGB monitor's gamma. The gradient's brightness values (as measured by a brightness meter) should now increase linearly on-screen. They do. That should look like a linear gradient, right?

Wrong.

Instead, we get this:

 A linear gradient
A perfectly ’linear’ gradient

This doesn’t look linear: shades don’t seem to claim equal space. Instead, it looks as if dark shades are too bright, while bright shades wash out. Here’s what I’d expected to see:

 A perceptually linear gradient
A perceptually linear gradient

The difference is subtle, but look at both gradients and ask yourself: which seems to have more detail? Which is more balanced?

Even though the first gradient is more linear in terms of physical brightness, the second one looks more linear.

I find this counter-intuitive. But where intuition fails, ratio may help; and there is indeed a rational explanation: Visual perception is non-linear.

And You See The Ones In Darkness/Those In Brightness Drop From Sight

The human eye can distinguish more contrast detail in darker shades.

You don’t have to take my word for it; instead take those of Dr. Charles Poynton – the person who gave HDTV square pixels and the number 1080 . Here is a diagram of how our perception tends to respond to changes in lightness, which I found in his dissertation :

brightness perception
CIE Lightness ( Poynton 2018 , pg. 17). Note how perception biases up around darker shades.

CIE Lightness, when defined as a (relative) curve of just-noticeable differences, is fitted very nicely by a power function with exponent 0.42, with a small linear segment below a relative luminance of about 1%.

This is fine. Nature. It probably helped our ancestors to survive or something. And it does explain why our physically linear gradient looked too bright in dark areas. Let’s draw a diagram:

A perceptually linear gradient
A linear gradient getting biased
in black: the biases
in red: the signal at this point in the chain

Check the Bias

Whenever we want to draw a perceptually linear gradient, we must remember to pay our dues to evolution, and factor in this perceptual bias.

If you want the appearance of a linear gradient, you must display a non-linear gradient , one that tunes down darker parts. Effectively, you want to apply the inverse of the non-linearity that is introduced by perception. Confusingly, this may be done automatically for you if you forget to do any gamma correction:

Two-Penny Gamma Correction

If you have an sRGB monitor and you innocently don’t do any sRGB correction (by rendering into a linear RGB framebuffer such as FORMAT_R8G8B8A8_UNORM for example), linear gradient values will get biased by the monitor’s gamma response alone – the result will be a gradient that looks “about linear”.

A perceptually linear gradient
Our perceptually linear gradient, innocently created

It looks “about linear” because what happens is that while the monitor will “gamma” the gradient, your eye will “de-gamma” the gradient again – and since these two non-linear effects on the signal (monitor, eye) are almost inverses of each other, we get a linear perceived signal at the end.
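
Roughly, with a display gamma of about 2.2 and Poynton's perceptual exponent of about 0.42, the two responses compose to nearly the identity:

perceived ≈ (signal^2.2)^0.42 = signal^(2.2 × 0.42) ≈ signal^0.92 ≈ signal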

And here’s a cool thing: This was by design!

"The nonlinearity of a CRT is very nearly the inverse of the lightness sensitivity of human vision. The nonlinearity causes a CRT’s response to be roughly perceptually uniform. Far from being a defect, this feature is highly desirable."

Why is this desirable?

A big reason for encoding images in sRGB is that, because of sRGB’s perceptual nature, we get much better perceptual luminance contrast resolution out of 8bits per channel. Instead of wasting bits on high brightnesses where our eye has trouble noticing change, we spend most of the bit-budget where it counts: on darker shades.

sRGB is an elegant form of perceptual compression: images at 8-bits-per-channel and below (for reasons of encoding efficiency) really want to be encoded as sRGB.

And if the monitor can display these sRGB images directly and natively (because the hardware applies the inverse of the sRGB gamma transform) – that’s just a perfect match…

Practical applications for rendering using Vulkan

In Vulkan, if you can use an sRGB format for your swapchain image, then you don't have to manually correct for sRGB gamma. Your pixels will be automatically stored in non-linear sRGB, and displayed linearly on-screen. This is great for encoding the highest amount of perceptual colour contrast detail using a limited number of bits (sRGB formats are usually about 8 bits per channel).
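
For example, here is a rough C sketch of picking such a format at swapchain-creation time (error handling omitted; it assumes you already have a physical device and a surface handle):

#include <vulkan/vulkan.h>

/* Prefer an 8-bit sRGB surface format if the surface offers one;
   otherwise fall back to whatever is reported first. */
VkSurfaceFormatKHR pick_srgb_format(VkPhysicalDevice gpu, VkSurfaceKHR surface) {
    uint32_t count = 0;
    vkGetPhysicalDeviceSurfaceFormatsKHR(gpu, surface, &count, NULL);

    VkSurfaceFormatKHR formats[64];
    if (count > 64) count = 64;
    vkGetPhysicalDeviceSurfaceFormatsKHR(gpu, surface, &count, formats);

    for (uint32_t i = 0; i < count; ++i) {
        if ((formats[i].format == VK_FORMAT_B8G8R8A8_SRGB ||
             formats[i].format == VK_FORMAT_R8G8B8A8_SRGB) &&
            formats[i].colorSpace == VK_COLOR_SPACE_SRGB_NONLINEAR_KHR) {
            return formats[i];
        }
    }
    return formats[0]; // surfaces always report at least one format
}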

When image format names contain the suffix *_SRGB, the sRGB gamma transform is applied transparently (automatically, without you having to do anything) on every texel read (transform from non-linear space into linear space) and texel write (transform from linear space into non-linear space). This is useful because we can only meaningfully blend color in linear space; blending in non-linear space would not be (physically) correct. The specs on Khronos Data Formats have some great documentation on this topic.

But this means that you need to undo the effect of the implicit sRGB gamma transform on texel write if you want to render a perceptually linear gradient while using an sRGB image backing. The function that the Vulkan driver applies for you on texel write is called the srgb_oetf, short for "sRGB optical-electrical transfer function".
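
For reference, a minimal GLSL sketch of that forward transform (the standard piecewise sRGB encode); you normally never call this yourself when the driver does the conversion for you:

vec3 srgb_oetf(in const vec3 c) {
    bvec3 cutoff = lessThan(c, vec3(0.0031308));
    vec3 lower  = c * vec3(12.92);
    vec3 higher = vec3(1.055) * pow(c, vec3(1.0 / 2.4)) - vec3(0.055);
    return mix(higher, lower, cutoff); // pick 'lower' where cutoff is true
}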

To neutralise this function, you must apply the inverse_srgb_oetf, that is, the srgb_eotf ("sRGB electro-optical transfer function"), just before you store the texel.

Here is how this looks using our schematic from before:

A perceptually linear gradient using srgb
A perceptually linear gradient, corrected, and encoded using sRGB

Draw a perceptually linear gradient into an sRGB swapchain image

vec3 srgb_eotf(in const vec3 c) {
    bvec3 cutoff = lessThan(c, vec3(0.04045));
    vec3 lower = c/vec3(12.92);
    vec3 higher = pow((c + vec3(0.055))/vec3(1.055), vec3(2.4));
    return mix(higher, lower, cutoff);
}

void main () {
    vec3 color = vec3(uv.x);
    outFragColor = vec4(srgb_eotf(color),1);
}

And voila:

 A perceptually linear gradient
A perceptually linear gradient

Further reading:

Bonus

Thank you for sticking around past the end credits. Here's an extra bit of information that might come in handy at a CGI pub quiz one day:

Did you ever wonder what the "s" in sRGB stands for? Me neither – until now; I assumed it stood for super. The sad reality is more humble: apparently it stands for Standard. "Standard RGB". What a standard.

The Weird Way the 404 Media Zine Was Built

Hacker News
tedium.co
2025-12-12 00:05:36
Comments...
Original Article

I write a lot these days, but my path into journalism, going way back to J-School, was through layout.

For years, I was a graphic designer at a number of newspapers—some fairly small, some quite large. I was a card-carrying member of the Society for News Design. It was one of my biggest passions, and I fully expected to have a long career in newspaper design. But newspapers as a medium haven’t really panned out, so I eventually fell into writing.

But I still adore laying out a big project, conceptualizing it, and trying to use it to visually add to the story that the words are trying to convey. It’s not quite a lost art, but I do think that print layout is something that has been a bit back-burnered by society at large.

So when 404 Media co-founder Jason Koebler, who spent years editing my writing for Motherboard , reached out about doing a zine , I was absolutely in. The goal of the zine —to shine a spotlight on the intersection of ICE and surveillance tech—was important. Plus, I like working with Jason, and it was an opportunity to get into print design again after quite a few years away.

I just had two problems: One, I have decided that I no longer want to give Adobe money because of cost and ethical concerns about its business model. And two, I now use Linux pretty much exclusively ( Bazzite DX , in case you’re wondering).

But the good news is that the open-source community has done a lot of work, and despite my own tech shifts, professional-grade print design on Linux is now a viable option.

zine_affinity.jpg
What my Affinity interface looks like. It’s the Windows version, but it’s running on Linux.

Why page layout on Linux is fairly uncommon

The meme in the Linux community writes itself: “I would move over to Linux, but I need Photoshop and InDesign and [insert app here] too much.” In the past, this has been a real barrier for designers, especially those who rely on print layout, where open-source alternatives are very limited. (They’ve also been traditionally at the mercy of print shops that have no time for your weird non-standard app.)

Admittedly, the native tools have been getting better. I'm not really a fan myself, but I know GIMP is getting closer to parity with Photoshop. Inkscape is a totally viable vector drawing app. Video is very doable on Linux thanks to the FOSS Kdenlive and the commercial DaVinci Resolve. Blender is basically a de facto standard for 3D at this point. The web-based Penpot is a capable Figma alternative. And Krita, while promoted as a digital painting app, has become my tool of choice for making frame-based animated GIFs, which I do a lot for Tedium.

But for ink-stained print layout nerds, it has been tougher to make the shift (our apologies to Scribus ). And Adobe locks down Creative Cloud pretty hard.

However, the recent Affinity release, while drawing some skepticism from the open-source community as a potential enshittification issue, is starting to open up a fresh lane. For those not aware, the new version of Affinity essentially combines the three traditional design apps—vector editor, raster editor, and page layout—into a single tool. It’s pretty good at all three. (Plus, for business reasons related to its owner Canva, it’s currently free to use.)

localhost-mj0vxkph.jpg
I’ve tested a few methods for making Affinity work on WINE, but the one I’ve found most flexible is by using the tool Lutris, which is meant to run games but I’m using to run design software.

While it doesn’t have a dedicated Linux version, it more or less runs very well using WINE , the technology that has enabled a Linux renaissance via the Steam Deck . (Some passionate community members, like the WINE hacker ElementalWarrior , have worked hard to make this a fully-fleshed out experience that can even be installed more or less painlessly.)

The desire for a native Linux version of a pro-level design app is such that the Canva subsidiary is thinking about doing it themselves .

But I’m not the kind of person who likes to wait, so I decided to try to build as much of the zine as I could with Affinity for page layout. For the few things I couldn’t do, I would remote into a Mac.

The RISO factor

Another consideration here is the fact that this zine is being built with Risograph printing , a multicolor printing approach distinct from the more traditional CMYK . The inky printing process, similar to screen printing, has a distinct, vibrant look, even if it avoids the traditional four-color approach (in our case, using layers of pink, black, and lime green).

Throughout the process, I spent a lot of time setting layers to multiply to ensure the results looked good, and adding effects like halftone and erase to help balance out the color effects. This mostly worked OK, though I did have some glitches.

At one point, a lime-green frog lost much of its detail when I tried to RISO-fy it, requiring me to double-check my color settings and ensure I was getting the right tone. And sometimes, PDF exports from Affinity added unsightly lines, which I had to go out of my way to remove. If I was designing for newspapers, I might have been forced to come up with a quick plan B for that layout. But fortunately, I had the luxury of not working on a daily deadline like I might have back in the day.

I think that this layout approach is genuinely fascinating—and I know Jason in particular is a huge fan of it. Could I see other publications in the 404 mold taking notes from this and doing the same thing? Heck yes.

localhost-mizqd8yv.jpg
A sneak peek at the inside layout of the 404 Media zine.

The ups and downs of print layout on Linux

So, the headline you can take away from this is pretty simple: Laying stuff out in Affinity over Linux is extremely doable, and if you’re doing it occasionally, you will find a quite capable tool.

Admittedly, if this was, like, my main gig, I might still feel the urge to go back to MacOS—especially near the end of the process. Here’s what I learned:

The good: Workflow-wise, it was pretty smooth. Image cutouts—a tightly honed skill of mine that AI has been trying to obsolete for years—were very doable. Affinity also has some great effects tools that in many ways beat equivalents in other apps, such as its glitch tool and its live filter layers. It didn’t feel like I was getting a second-class experience when all was said and done.

The bad: My muscle memory for InDesign shortcuts was completely ineffective for this, and there were occasional features of InDesign and Photoshop that I did not find direct equivalents for in Affinity. WINE’s file menus tend to look like old Windows, which might be a turn-off for UX purists, and required a bit of extra navigation to dig through folders. Also, one downside of WINE that I could not work past was that I couldn’t use my laptop’s Intel-based GPU for machine learning tasks, a known bug that I imagine slowed some things down on graphically intensive pages.

localhost-mizsy5w4.jpg
I checked, by the way; this was not a WINE thing, it did this in MacOS too. (Ernie Smith)

The ugly: I think one area Affinity will need to work on, as it attempts to sell the idea that you can design in one interface, is better strategies to help mash down content for export. At one point while I was trying to make a PDF, Affinity promised me that the file I would be exporting was going to be 17 exabytes in size, which my SSD was definitely not large enough for. That wasn't true, but it does emphasize that the dream of doing everything in one interface gets complicated when you want to send things to the printer. Much of the work I did near the end of the process was rasterizing layers to ensure everything looked as intended.

When I did have to use a Mac app for something (mainly accessing Spectrolite , a prepress app for RISO designs), I accessed an old Hackintosh using NoMachine , a tool for connecting to computers remotely. So even for the stuff I actually needed MacOS for, I didn’t need to leave the comforts of my janky laptop.

Looking for a Big Tech escape hatch

Was it 100% perfect? No. Affinity crashed every once in a while, but InDesign did that all the time back in the day. And admittedly, an office full of people using Affinity on Linux isn’t going to work as well as one guy in a coffee shop working with a team of editors over chat and email.

But it’s my hope that experiences like mine convince other people to try it, and for companies to embrace it. Affinity isn’t open-source, and Canva is a giant company with plenty of critics, just like Adobe. But there are emerging projects like PixiEditor and Graphite that could eventually make print layout an extremely viable and even modern open-source endeavor.

But we have to take victories where we can find them, and the one I see is that Affinity is a lot less locked down than Creative Cloud, which is why it’s viable on Linux. And in general, this feels like an opportunity to get away from the DRM-driven past of creative software. (Hey Canva, it’s never too late to make Affinity open-source.)

Difficult reporting shouldn’t have to be tethered to the whims of Big Tech to exist. Especially when that tech—on Amazon’s cloud, using Adobe’s PDFs, through Google’s search, over Meta’s social network, with Apple’s phones, and on Microsoft’s operating system—too often causes uncomfortable tensions with the reporting. This is one step towards a better escape hatch.

Nokia N900 Necromancy – giving a new life to a classic Linux smartphone

Hacker News
yaky.dev
2025-12-12 00:04:29
Comments...
Original Article

Building a fake battery, adding a USB-C port, booting from SD card, and giving a new life to a classic Linux smartphone.

n900_01.jpg

My friend Dima sent me his old-school classic Nokia N900. The battery is very old, and it does not boot as-is. So naturally, I wanted to see if I can resurrect it.

Step 0: Is such a thing even possible?

Yes it is! (Unless there are other hardware issues)

I ran a smartphone without a battery a few years ago.

n900_02.jpg

Cut and soldered a quick prototype to connect instead of the battery. Resistors are to emulate the "normal" temperature by providing expected resistance between the third pin and ground. See link above for details.

n900_03.jpg

Hooked up a large supercapacitor to the battery pins and to a +5V source. If I recall correctly, using a capacitor without additional power did not work.

n900_04.jpg

And it boots!

Now, let's make something that can fit into the battery compartment.

Step 1: Better "battery"

These supercapacitors are nice, but way too large. After searching on Mouser, I found FM0H473ZF, 47,000 µF (0.047 F) capacitors in a rectangular case that is only 5mm thick.

n900_11.jpg

Ten of these (~0.5 F total) are enough to run the smartphone without it dying.

n900_12.jpg

Capacitor contraption (TM) arranged (using a 3D-printed template) and soldered together.

n900_13.jpg

And they all fit nicely into the battery compartment. The power is provided by a wire routed through the hole for the carry loop.

n900_14.jpg

Running fine! One noticeable issue is that capacitors are getting pretty warm. Probably my sloppy soldering, but no shorts that I could find.

⚠️

This is where I should have stopped. At some point while messing with the "battery" and power, I managed to corrupt the internal partition and the installed OS. Not sure if this was from the sudden battery pull or from supplying +5V instead of the expected +4.2V to the battery pins. Luckily, newer Maemo Leste is intended to run from the SD card anyway, and internal storage still works, so I was able to overwrite it with the bootloader.

Bootloader setup on Maemo Wiki

Step 2: Consolidating connectors

I thought it might be practical to power the "battery" through the existing USB port. Just run the +5V wire from USB to the "battery", and avoid additional wires. (If you think this is kinda stupid, you are right)

n900_21.jpg

Yooo... What is happening here? Dima says "oh yeah, the USB port was re-soldered. Twice". A quick glance at the forums also confirms that the USB port was poorly designed and is prone to breaking.

n900_22.jpg

Just one wire from the +5V pad to the "battery". The ground is the same as the battery pin.

n900_23.jpg

Assembled everything back, routed and soldered the +5V wire, and added a diode to prevent the battery from feeding the USB port, and to drop the voltage to more acceptable ~4.3V.

n900_24.jpg

The setup works, but the smartphone constantly shows either "Charging", or "Device using more power than it is receiving from the PC. Charging with a compatible charger is recommended", with battery gauge going crazy.

And then, the power just cut out.

Yeah, this was not a great idea. Let's see what happened.

n900_25.jpg

The USB +5V wire detached itself from the port. I presume this was due to some combination of high current, age, stress, and corrosion.

However, when I opened the smartphone up, I... ripped off the +5V pad. (dark circle in lower right on the photo)

Fuck.

After reading some N900 forums, that +5V pad is a common place to connect the replacement USB port to (which was done here), but... that is the ONLY +5V connection on the board besides the pads under the USB port itself.

FUCK!

🪦

RIP Nokia N900. I tried to resurrect you, but instead, I killed your OS and ripped out the USB port wires.

Step 3: Radical replacements

To be fair, N900 is far from dead. I already flashed u-boot, was able to boot from SD card, and do not plan to use internal storage otherwise. Power can be supplied entirely through the new "battery". So technically, I do not need the USB functionality for the smartphone itself, just to power the "battery". At this point, I might as well replace the port with USB-C. Because why not.

n900_31.jpg

Approximate placement of the new USB port.

The location of the original port is not very convenient. It is sandwiched between the main board and the SD card reader (lower left on the photo). The SD card reader is also attached by a permanently bonded ribbon cable (i.e. nearly irreplaceable).

First, I used a small file to make the micro-USB-shaped hole on the smartphone body fit the USB-C shape. Then, I took a small 6-pin USB-C port, cut and sanded down its plastic parts to make it fit in the original spot. It is still slightly (~0.25mm) taller than the original, but I cannot make it any slimmer.

I tried to attach the USB-C port to the board in the correct place by carefully assembling the board, port and SD card reader into the body, and using small drops of glue to lightly affix the edge of the USB port (that I could reach) to the main board. The intent was to wait for the glue to cure, take everything back apart, and glue the port in its now-correct position for good. This took several tries but did not really work: the port got detached every time I removed the main board, and the superglue I used left lots of residue but did not adhere. Luckily, the tight fit and the shape of the USB-C port hold it in place mechanically quite well.

n900_32.jpg

USB-C with +5V and ground attached.

Originally, I planned to solder all 6 pins and add 5.1 kΩ pull-down resistors to the CC1 and CC2 pins (for full power delivery functionality). But there is simply not enough space to route the wires; the narrow valley between the chips (in the lower right of the photo) barely fits 3, and I did not have anything thinner on hand.

n900_33.jpg

Nokia N900 with a USB-C port! Looks pretty nice IMO.

Since I did not solder the pull-down resistors, this USB-C port could only be powered by a "dumb" USB-A-to-USB-C cable, at default 0.5A. Chargers with power delivery functionality cannot identify such USB-C ports, and will not provide power at all. (This is also an issue with some handheld consoles such as RGB30)

n900_34.jpg

The two wires are routed to the battery compartment through a very convenient opening in the metal frame, crimped and inserted into a DuPont connector.

batt_01.jpg

Back to the battery. The capacitor contraption I built before works, but was kind of flimsy, and does not have any more space for a DuPont connector. Also, I would rather use a single capacitor, but it still has to fit. Since the original battery is unusable, I might as well try to salvage it, too.

batt_02.jpg

Take off the sticker (that tells you not to do so :). The top BCM piece is held to the main battery body by two tiny screws (hidden under some crumbly compound) on each end, double-sided sticker, and a single lead in the middle.

batt_03.jpg

Battery Control Module. Interestingly, for this battery, the body is the positive terminal. So the positive lead connects the battery body and the positive pin directly, while the negative lead goes through some control circuitry. Attaching a capacitor to these battery terminals should be sufficient.

batt_04.jpg

Since I have a 3D printer (and once you have one, every problem can be solved by printing stuff), I printed a new "battery" to accommodate a large capacitor, a diode (for voltage drop), wires, DuPont connectors, and the original battery's BCM.

n900_35.jpg

N900 with a new "battery". Fits really tight, and only 0.25-0.5mm too tall, so the cover still snaps closed.

n900_36.jpg

Boots without problems. Since the attached capacitor is pretty large, it can take a minute or two to charge it to an acceptable level (~4.0V) with a 0.5A current.
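
As a rough sanity check on that charge time: treating the 0.5 A USB current as roughly constant and ignoring the diode drop and the phone's own draw, the time to reach a target voltage is about t ≈ C·V/I. Here is a minimal sketch of that estimate; the actual capacitance of the new "battery" is not stated above, so the 10 F figure is just an assumed example:

# Idealized constant-current charge-time estimate: t = C * V / I.
# The capacitance is an assumed example value, not the actual part used in this build.
def charge_time_seconds(cap_farads: float, target_volts: float, current_amps: float) -> float:
    return cap_farads * target_volts / current_amps

cap_farads = 10.0    # hypothetical large supercapacitor
target_volts = 4.0   # the "acceptable level" mentioned above
current_amps = 0.5   # default USB current without CC pull-down resistors

print(f"~{charge_time_seconds(cap_farads, target_volts, current_amps):.0f} s to reach {target_volts} V")  # ~80 s

With those assumptions the estimate lands right around the observed "minute or two"; a smaller capacitor would charge proportionally faster.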

n900_37.jpg

Nokia N900 enjoying its new life as an online radio device using Open Media Player.


Freexian Collaborators: Debian Contributions: Updates about DebConf Video Team Sprint, rebootstrap, SBOM tooling in Debian and more! (by Anupa Ann Joseph)

PlanetDebian
www.freexian.com
2025-12-12 00:00:00
Debian Contributions: 2025-11 Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services. DebConf Video Tea...
Original Article

Debian Contributions: 2025-11

Contributing to Debian is part of Freexian’s mission . This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services .

DebConf Video Team Sprint

The DebConf Video Team records, streams, and publishes talks from DebConf and many miniDebConfs. A lot of the infrastructure development happens during setup for these events, but we also try to organize a sprint once a year to work on infrastructure, when there isn’t a DebConf about to happen. Stefano attended the sprint in Herefordshire this year and wrote up a report .

rebootstrap, by Helmut Grohne

A number of jobs were stuck on architecture-specific failures. gcc-15 and dpkg still occasionally disagree about whether PIE is enabled, and big-endian mipsen needed fixes in systemd. Beyond this, regular uploads of libxml2 and gcc-15 required fixes and rebasing of pending patches.

Earlier, Loongson used rebootstrap to create the initial package set for loong64, and Miao Wang has now submitted their changes. As a result, there is now initial support for suites other than unstable, and for use with derivatives.

Vendors of Debian-based products should be paying attention to evolving requirements in different jurisdictions (such as the CRA, or updates to CISA’s Minimum Elements for a Software Bill of Materials) that require them to make a Software Bill of Materials (SBOM) available for their products. It is therefore important to have tools in Debian that make it easier to produce such SBOMs.

In this context, Santiago continued the work on packaging libraries related to SBOMs. This includes the packaging of the SPDX python library (python-spdx-tools), and its dependencies rdflib and mkdocs-include-markdown-plugin. System Package Data Exchange (SPDX), defined by ISO/IEC 5962:2021, is an open standard capable of representing systems with software components as SBOMs, along with other data and security references. SPDX and CycloneDX (whose Python library python3-cyclonedx-lib was packaged by prior efforts this year) are the two main SBOM standards available today.
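
For a concrete sense of what such tooling ultimately produces, here is a minimal, hand-written sketch of an SPDX-2.3-style document serialized to JSON from Python. The package, namespace, and license values are placeholders for illustration, not output of the Debian tooling mentioned above:

import json
from datetime import datetime, timezone

# A minimal SPDX-2.3-style SBOM document (illustrative only; field values are placeholders).
sbom = {
    "spdxVersion": "SPDX-2.3",
    "dataLicense": "CC0-1.0",
    "SPDXID": "SPDXRef-DOCUMENT",
    "name": "example-product-sbom",
    "documentNamespace": "https://example.org/spdxdocs/example-product-sbom",
    "creationInfo": {
        "created": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "creators": ["Tool: hand-written-example"],
    },
    "packages": [
        {
            "name": "hello",
            "SPDXID": "SPDXRef-Package-hello",
            "versionInfo": "2.10-3",
            "downloadLocation": "NOASSERTION",
            "licenseConcluded": "GPL-3.0-or-later",
        }
    ],
}

print(json.dumps(sbom, indent=2))

A real SBOM for a Debian-based product would enumerate every shipped component, typically with checksums and the relationships between packages.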

Miscellaneous contributions

  • Carles improved po-debconf-manager : added checking status of bug reports automatically via python-debianbts ; changed some command line options naming or output based on user feedback; finished refactoring user interaction to rich; codebase is now flake8-compliant; added type safety with mypy .
  • Carles, using po-debconf-manager , created 19 bug reports for translations where the merge requests were pending; reviewed and created merge requests for 4 packages.
  • Carles planned a second version of the tool that detects packages that Recommends or Suggests packages which are not in Debian. He is taking ideas from dumat .
  • Carles submitted a pull request to python-unidiff2 (adapted from the original pull request to python-unidiff ). He also started preparing a qnetload update.
  • Stefano did miscellaneous python package updates: mkdocs-macros-plugin , python-confuse , python-pip , python-mitogen .
  • Stefano reviewed a beets upload for a new maintainer who is taking it over.
  • Stefano handled some debian.net infrastructure requests.
  • Stefano updated debian.social infrastructure for the “trixie” point release.
  • The update broke jitsi.debian.social, Stefano put some time into debugging it and eventually enlisted upstream assistance , who solved the problem!
  • Stefano worked on some patches for Python that help Debian:
    • GH-139914 : The main HP PA-RISC support patch for 3.14.
    • GH-141930 : We observed an unhelpful error when failing to write a .pyc file during package installation. We may have fixed the problem, and at least made the error better.
    • GH-141011 : Ignore missing ifunc support on HP PA-RISC.
  • Stefano spun up a website for hamburg2026.mini.debconf.org .
  • Raphaël reviewed a merge request updating tracker.debian.org to rely on bootstrap version 5.
  • Emilio coordinated various transitions.
  • Helmut sent patches for 26 cross build failures.
  • Helmut officially handed over the cleanup of the /usr-move transition .
  • Helmut monitored the transition moving libcrypt-dev out of build-essential and bumped the remaining bugs to rc-severity in coordination with the release team.
  • Helmut updated the Build-Profiles patch for debian-policy incorporating feedback from Sean Whitton with a lot of help from Nattie Mayer-Hutchings and Freexian colleagues.
  • Helmut discovered that the way mmdebstrap deals with start-stop-daemon may result in broken output and sent a patch .
  • As a result of armel being removed from “sid”, but not from “forky”, the multiarch hinter broke. Helmut fixed it.
  • Helmut uploaded debvm accepting a patch from Luca Boccassi to fix it for newer systemd.
  • Colin began preparing for the second stage of the OpenSSH GSS-API key exchange package split .
  • Colin caught and fixed a devscripts regression due to it breaking part of Debusine.
  • Colin packaged django-pgtransaction and backported it to “trixie”, since it looks useful for Debusine.
  • Thorsten uploaded the packages lprng , cpdb-backend-cups , cpdb-libs and ippsample to fix some RC bugs as well as other bugs that accumulated over time. He also uploaded cups-filters to all Debian releases to fix three CVEs.

GPT-5.2

Simon Willison
simonwillison.net
2025-12-11 23:58:04
OpenAI reportedly declared a "code red" on the 1st of December in response to increasingly credible competition from the likes of Google's Gemini 3. It's less than two weeks later and they just announced GPT-5.2, calling it "the most capable model series yet for professional knowledge work". Key cha...
Original Article

11th December 2025

OpenAI reportedly declared a “code red” on the 1st of December in response to increasingly credible competition from the likes of Google’s Gemini 3. It’s less than two weeks later and they just announced GPT-5.2 , calling it “the most capable model series yet for professional knowledge work”.

Key characteristics of GPT-5.2

The new model comes in two variants: GPT-5.2 and GPT-5.2 Pro. There’s no Mini variant yet.

GPT-5.2 is available via their UI in both “instant” and “thinking” modes, presumably still corresponding to the API concept of different reasoning effort levels.

The knowledge cut-off date for both variants is now August 31st 2025 . This is significant—GPT 5.1 and 5 were both Sep 30, 2024 and GPT-5 mini was May 31, 2024.

Both of the 5.2 models have a 400,000 token context window and 128,000 max output tokens—no different from 5.1 or 5.

Pricing-wise, 5.2 is a rare increase: it’s 1.4x the cost of GPT-5.1, at $1.75/million input and $14/million output. GPT-5.2 Pro is $21.00/million input and a hefty $168.00/million output, putting it up there with their previous most expensive models, o1 Pro and GPT-4.5.

So far the main benchmark results we have are self-reported by OpenAI. The most interesting ones are a 70.9% score on their GDPval “Knowledge work tasks” benchmark (GPT-5 got 38.8%) and a 52.9% on ARC-AGI-2 (up from 17.6% for GPT-5.1 Thinking).

The ARC Prize Twitter account provided this interesting note on the efficiency gains for GPT-5.2 Pro

A year ago, we verified a preview of an unreleased version of @OpenAI o3 (High) that scored 88% on ARC-AGI-1 at est. $4.5k/task

Today, we’ve verified a new GPT-5.2 Pro (X-High) SOTA score of 90.5% at $11.64/task

This represents a ~390X efficiency improvement in one year
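
The headline multiplier follows directly from the two per-task costs quoted above (the small accuracy gain comes on top of that); a quick check:

# Cost-efficiency ratio implied by the ARC Prize numbers above.
o3_preview_cost_per_task = 4500.00   # est. dollars/task for the o3 (High) preview on ARC-AGI-1
gpt52_pro_cost_per_task = 11.64      # dollars/task for GPT-5.2 Pro (X-High)

print(f"~{o3_preview_cost_per_task / gpt52_pro_cost_per_task:.0f}x cheaper per task")  # ~387x, roughly the claimed ~390X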

GPT-5.2 can be accessed in OpenAI’s Codex CLI tool like this:

codex -m gpt-5.2

There are three new API models:

OpenAI have published a new GPT-5.2 Prompting Guide .

It’s better at vision

One note from the announcement that caught my eye:

GPT‑5.2 Thinking is our strongest vision model yet, cutting error rates roughly in half on chart reasoning and software interface understanding.

I had disappointing results from GPT-5 on an OCR task a while ago. I tried it against GPT-5.2 and it did much better:

llm -m gpt-5.2 ocr -a https://static.simonwillison.net/static/2025/ft.jpeg

Here’s the result from that, which cost 1,520 input tokens and 1,022 output tokens, for a total of 1.6968 cents.
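
That figure is straightforward to reproduce from the per-million-token prices listed above; a quick sketch:

# Reproduce the 1.6968 cent figure from GPT-5.2's per-million-token pricing.
input_tokens, output_tokens = 1_520, 1_022
input_price, output_price = 1.75, 14.00  # dollars per million tokens

cost_dollars = (input_tokens * input_price + output_tokens * output_price) / 1_000_000
print(f"{cost_dollars * 100:.4f} cents")  # 1.6968 cents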

Rendering some pelicans

For my classic “Generate an SVG of a pelican riding a bicycle” test:

llm -m gpt-5.2 "Generate an SVG of a pelican riding a bicycle"

Described by GPT-5.2: Cartoon-style illustration: A white, duck-like bird with a small black eye, oversized orange beak (with a pale blue highlight along the lower edge), and a pink neckerchief rides a blue-framed bicycle in side view; the bike has two large black wheels with gray spokes, a blue front fork, visible black crank/pedal area, and thin black handlebar lines, with gray motion streaks and a soft gray shadow under the bike on a light-gray road; background is a pale blue sky with a simple yellow sun at upper left and two rounded white clouds (one near upper center-left and one near upper right).

And for the more advanced alternative test, which tests instruction following in a little more depth:

llm -m gpt-5.2 "Generate an SVG of a California brown pelican riding a bicycle. The bicycle
must have spokes and a correctly shaped bicycle frame. The pelican must have its
characteristic large pouch, and there should be a clear indication of feathers.
The pelican must be clearly pedaling the bicycle. The image should show the full
breeding plumage of the California brown pelican."

Digital illustration on a light gray/white background with a thin horizontal baseline: a stylized California brown pelican in breeding plumage is drawn side-on, leaning forward and pedaling a bicycle; the pelican has a dark brown body with layered wing lines, a pale cream head with a darker brown cap and neck shading, a small black eye, and an oversized long golden-yellow bill extending far past the front wheel; one brown leg reaches down to a pedal while the other is tucked back; the bike is shown in profile with two large spoked wheels (black tires, white rims), a dark frame, crank and chainring near the rear wheel, a black saddle above the rear, and the front fork aligned under the pelican’s head; text at the top reads "California brown pelican (breeding plumage) pedaling a bicycle".

Elon Musk teams with El Salvador to bring Grok chatbot to public schools

Guardian
www.theguardian.com
2025-12-11 23:11:59
President Nayib Bukele entrusting chatbot known for calling itself ‘MechaHitler’ to create ‘AI-powered’ curricula Elon Musk is partnering with the government of El Salvador to bring his artificial intelligence company’s chatbot, Grok, to more than 1 million students across the country, according to ...
Original Article

Elon Musk is partnering with the government of El Salvador to bring his artificial intelligence company’s chatbot, Grok, to more than 1 million students across the country, according to a Thursday announcement by xAI. Over the next two years, the plan is to “deploy” the chatbot to more than 5,000 public schools in an “AI-powered education program”.

xAI’s Grok is better known for referring to itself as “MechaHitler” and espousing far-right conspiracy theories than it is for public education. Over the past year, the chatbot has spewed various antisemitic content, decried “white genocide” and claimed Donald Trump won the 2020 election.

Nayib Bukele, El Salvador’s president, is now entrusting the chatbot to create curricula in classrooms across the country. Bukele has long embraced technology, making El Salvador the first country in the world to use bitcoin as legal tender, and being one of the first Central American presidents to use Twitter, now X, as a platform. He is also known for ruling with an iron fist and working with Trump to incarcerate deportees to El Salvador’s notorious Cecot prison.

“El Salvador doesn’t just wait for the future to happen; we build it,” Bukele said in a statement about the partnership with xAI. “This partnership is destined to deliver something rather extraordinary for all of humanity.”

Musk touted his partnership with Bukele on Thursday. On X , between posts about “white genocide” and blaming asylum seekers for crime, Musk posted comments about Grok being spread throughout El Salvador’s schools.

He responded approvingly to a comment from Katie Miller, the wife of Trump’s senior adviser Stephen Miller, in which she wrote: “If we are serious about restoring education to math, science and English – why would we allow left leaning liberal [sic] AI our kids? This unlocks non-woke educational tools for our kids.”

xAI is not the first artificial intelligence company to introduce chatbots to public schools. OpenAI announced a partnership with Estonia in February where it could provide all students and teachers in the country’s secondary school system with a customized ChatGPT. Students in rural Colombia also started using Meta’s AI chatbots in 2023 and within a year, teachers began blaming the tech for low grades and failing exams, according to Rest of World .

Brave browser starts testing agentic AI mode for automated tasks

Bleeping Computer
www.bleepingcomputer.com
2025-12-11 23:03:37
Brave has introduced a new AI browsing feature that leverages Leo, its privacy-respecting AI assistant, to perform automated tasks for the user. [...]...
Original Article

Brave browser starts testing agentic AI mode for automated tasks

Brave has introduced a new AI browsing feature that leverages Leo, its privacy-respecting AI assistant, to perform automated tasks for the user.

Intended to assist with tasks such as autonomous web research, product comparison, promo-code discovery, and news summarization, the feature is currently in its testing phase and accessible through the Brave Nightly version.

The new agentic AI browsing mode is disabled by default and represents the first step towards tighter AI-user integration for the privacy-focused browser.

AI browsing mode on Leo
Source: Brave

AI risk and how Brave deals with it

Brave stresses that agentic AI browsing is "inherently dangerous" and shouldn’t be used for critical operations, mainly due to prompt injection attacks and the potential for misinterpreting users' intent.

To mitigate this risk, the new mode runs on a separate, isolated profile that does not have access to the user’s cookies, login information, and other sensitive data.

The mode will also be restricted from accessing the browser’s settings page, non-HTTPS sites, the Chrome Web Store, where it could download extensions, and any sites flagged by Brave’s Safe Browsing system.

All its actions will be visible in tabs, and anything risky will trigger warnings to the user, requesting their explicit approval.

User prompted to take over control at checkout step
Source: Brave

Additionally, the mode will be monitored by an ‘alignment checker’ mechanism, similar to what Google announced recently for Gemini’s agentic mode on Chrome, where an isolated second model evaluates whether the agent’s actions match user intent.

Being isolated, this second model cannot be affected by prompt-injection attacks that target the primary agent.

Brave will also encode specific policy-based rules and use models trained to mitigate prompt injection, such as Claude Sonnet, to provide effective protection.

Regarding data privacy, which is Brave’s core value, the vendor says there will be no compromise. The system will keep the same ad/tracker blocking and no-logs policy, while no user data will be used for AI model training.

Testing the new mode

Those interested in testing Brave’s new agentic AI mode can do so only through Brave Nightly, after enabling the “Brave’s AI browsing” flag in ‘brave://flags.’

This will enable a button on Leo’s chat box that activates the new browsing mode.

Tester feedback to help address any issues may be submitted here , while Brave also announced it’s doubling its HackerOne bug bounty payments for in-scope submissions concerning AI browsing.


Disney wants you to AI-generate yourself into your favorite Marvel movie

Guardian
www.theguardian.com
2025-12-11 22:25:51
The media company is investing $1bn in OpenAI – and allowing its characters to be used in generated videos Users of OpenAI’s video generation app will soon be able to see their own faces alongside characters from Marvel, Pixar, Star Wars and Disney’s animated films, according to a joint announcement...
Original Article

Users of OpenAI’s video generation app will soon be able to see their own faces alongside characters from Marvel, Pixar, Star Wars and Disney’s animated films, according to a joint announcement from the startup and Disney on Thursday. Perhaps you, Lightning McQueen and Iron Man are all dancing together in the Mos Eisley Cantina.

Sora is an app made by OpenAI , the firm behind ChatGPT, which allows users to generate videos of up to 20 seconds through short text prompts. The startup previously attempted to steer Sora’s output away from unlicensed copyrighted material, though with little success, which prompted threats of lawsuits by rights holders.

Disney announced that it would invest $1bn in OpenAI and, under a three-year deal perhaps worth even more than that large sum, that it would license about 200 of its iconic characters – from R2-D2 to Stitch – for users to play with in OpenAI’s video generation app.

A man holding a lightsaber next to R2-D2; a man about to race Lightning McQueen
Examples of content generated by OpenAI's Sora with Disney properties. Photograph: OpenAI

At a time of intense anxiety in Hollywood over the impact of AI on the livelihoods of writers, actors, visual effects artists and other creatives, Disney stressed its agreement with OpenAI would not cover talent likenesses or voices.

The announcement was framed as an extraordinary opportunity to empower fans.

Think of the “fan-inspired Sora short form videos”, as Disney called them in a press release – akin to taking an AI-generated version of a photo with Princess Jasmine at Disney World. OpenAI included screenshots of these kinds of videos in its press release, indicating how the two companies expect people to use the app’s new cast. Sora already allows users to generate videos that include their own likenesses.

Bob Iger, Disney’s CEO, said the licensing deal would place “imagination and creativity directly into the hands of Disney fans in ways we’ve never seen before”.

They may even offer a chance at wide viewership, with some fan-made videos being displayed on the Disney+ streaming service, a move seemingly designed to compete with TikTok’s and YouTube Shorts’ infinite feeds, which themselves often include clips of popular TV shows and movies.

Oils 0.37.0 - Alpine Linux, YSH, and mycpp

Lobsters
oils.pub
2025-12-11 22:23:20
Comments...
Original Article

| blog | oils.pub

2025-12-11

This is the latest version of Oils, a Unix shell:

To build and run it, follow the instructions in INSTALL.txt . If you're new to the project, see Why Create a New Shell? and posts tagged # FAQ .

Intro

This is a big release, with many new contributors! I wrote an HTML version of git shortlog to show that, which you'll see below.

I published a post last week with background on what we're doing: Links to Explain Oils in 2025 .

Building Thousands of Alpine Linux Packages with OSH

As mentioned in that post, we have a test harness called regtest/aports that builds Alpine Linux packages with OSH as the only shell on the machine.

It replaces /bin/sh , /bin/ash , and /bin/bash with OSH , which is a hard test. OSH has to run decades of accumulated shell scripts.

It's found many obscure bugs , which we wouldn't find any other way! Here are a few examples, described by contributors:

Andriy Sultanov and Daveads have also fixed many OSH bugs. It's great to see so much energy on the project!

12 Disagreements Left in aports/main

You can see our overall progress here: https://op.oils.pub/aports-build/published.html

  • The first run was in August, with 131 disagreements between OSH and other shells
    • This is out of 1595 packages in aports/main , a complete Linux system
  • Two months ago, we were down to 43 disagreements
  • Now we're down to 12
    • And more importantly, we know what all the bugs are! There is one tricky architectural issue left, which may take a while to fix. But I know we can fix it.

We also have good results for aports/community with 7100+ packages. After this experience, I think that a regtest/debian is feasible!

(I still have to send more replies regarding my October job ad — we are paving the way for more people to contribute.)

Lessons Learned

There are many things we could write about regtest/aports — it is surprisingly deep. But here are some condensed lessons:


For extra color, here's a thread that's relevant to build machine utilization . We're doing this work on a single machine with 20 cores (rented from Hetzner):

YSH

Thanks to Andriy and Dave for also improving YSH ! You can see their work in the shortlog below.


I also want to point to Will's YSH projects, some of which I mentioned in the last post too.

  • quexxon/selfysh - Transform a YSH script into a self-contained binary for simple distribution
  • quexxon/workbench/shttp - shttp is an HTTP server that invokes a subprocess to handle requests (think refreshed CGI)

mycpp

Andriy also fixed several bugs in mycpp , our Python-to-C++ translator. We're more carefully defining the subset of typed Python we support, which improves the experience of working on Oils.

Translating Oils to Rust

This is a good time to mention that I also want mycpp to translate Oils to Rust!

It could be the whole shell, or perhaps a library liboils with the core syntax and semantics of OSH and YSH .

Why? I list the reasons here: #blog-ideas > Reasons to translate Oils to Rust

  1. For tools like linters, formatters, and LSP.
  2. For the interactive shell : many users want something other than GNU readline.
  3. To further prove that Oils is an executable spec.
    • We have 2 complete implementations now, and Rust would be a third.
  4. To attract contributors. Others in the same space are using Rust: fish , Nix , brush , and many more ...
    • I think it would be better if more shell authors were working toward overlapping goals!

As I was writing this section, it got filled with technical details. So I'll save them for a future post.

This note is meant to start a discussion , so please chime in on Zulip or Github if you know about Python, Rust, and compilers!

I haven't written a full retrospective on mycpp , but it was quite an adventure. There are some fun tricks, like using constexpr to generate 100% portable object metadata for tracing GC.

Details on What's Changed - a git shortlog

Now for the list of changes! I slightly edited the output of the HTML git shortlog . (It uses the technique from this 2017 post ! The flat changelog has used that technique for years too.)

ccc83cdec [ysh] Evaluate the unary + operator (#2486)
39d8af9af [ysh] Array splice @[...] is allowed in expressions (#2482)
2d37fe3e7 [ysh] Expression sub $[42 + a[i]] is also valid in expressions, not just commands (#2491)
95f031e05 [eggex] Fix digits in char classes, like /[9]/
1c3459cf7 [ysh] Triple-quoted strings enforce ambiguous backslash rule too (#2495)
736ec0c5a [builtin/printf] %d and others support octal and hex strings as input (#2483)
6fda91d89 [osh] Support set -a, also known as set -o allexport (#2511)
Aidan
4665a12cd [test/spec] Add failing test cases for set -o noclobber (-C) (#2474)
ffeb0b5f1 [core] Fix >/dev/null and >>somefile when noclobber is set (#2503)
92844634e [mycpp/runtime] Expose a more flexible stat API to reduce syscalls in noclobber (#2518)
5eece76c7 [spec/here-doc] Add failing test case for mdev-conf failure (#2524)
35899015f [doc/ref] Remove deprecated io->evalToDict (#2525)
Andriy Sultanov
8663d2e5a [translation] Add missing case for test -b in DoUnaryOp (#2455)
7ff530d0c [builtin/trap] Support POSIX and bash usage patterns (#2453)
532ec9b55 [frontend] Fix parsing of say x=1> - redirect should not consume LHS number (#2479)
0cd06f164 [osh] Keep last value of $? when argv is [] (#2490)
d880ae829 [osh] Look ahead at (( to distinguish arithmetic and commands (#2481)
d620ba9ab [frontend] Store token length in 32 bits instead of 16 (#2497)
52781cce9 [regtest/aports] Add ability to patch packages (#2499)
e874613ef [mycpp] Handle two errors explicitly, rather than generating bad code or crashing (#2513)
7bb25c748 [spec/divergence] Add failing test case for gzip aports issue (#2531)
5b8109b8e [mycpp] Turn mutable "global" member state into private variables (#2536)
5647c933a [core] Run builtin cat in a child process (#2534)
d62fff502 [regtest/aports] Patch /usr/bin/autopoint for xz (#2533)
d18d6e397 [mycpp] Fix code generation with methods and free functions of the same name (#2517)
9bc91f993 [regtest/aports] Make the cause for #2426 more generic (#2553)
655ecbe7a [mycpp] Check that printf-style format strings are valid (#2549)
2253fb51c [mycpp] Disallow `break` inside a `switch` (#2554)
8d4e5b6ec [mycpp] Avoid crash with unreachable code (#2571)
515839308 [osh] Turn $[] into shell arithmetic, for compatibility (#2519)
80e47490a [osh/word_parse] Fix crash with case after $[] (#2585)
16d5e3eb8 [demo] More surveys of Str methods in Python/JS (#2588)
26ddfc05e [ysh builtins] Implement Str.{find,findLast,contains} and strcmp (#2590)
e15079d14 [builtin] Add and clean up "too many args" errors (#2589)
Bram Tertoolen
7d6b7ca92 [deps/wedge] Fix bash 4.4, busybox, zsh compilation with newer GCC
c2dd98beb [doc] Add note about regtest/aports dir structure (#2456)
ed5a6903d [builtin/read] Implement read -u (#2454)
77b286d04 [osh] Allow \ line continuation within "${x-arg}" (#2498)
86c2965b5 [regtest/aports] Add missing dep 'source build/dev-shell.sh' (#2526)
b046362e3 [regtest/aports] Replace " with ' in SQL files (#2527)
6a07bdd47 [regtest/aports] Associate mdev-conf with a cause (#2529)
7bd7e72bc [osh] Implement the kill builtin (#2478)
a44f92116 [regtest/aports] Add 2 new causes (#2537)
3c9862a65 [regtest/aports] Add cause patterns for jq and libidn (#2542)
2d3fe35ff [regtest/aports] Attribute more package failures to causes (#2559)
67bc79a85 [spec/known-differences] Add failing test cases for ifupdown-ng, xcb-util-renderutil (#2548)
b02bae732 [regtest/aports] More updates to cause.awk (#2564)
99e6d647b [regtest/aports] Add causes mkpasswd and gphoto2 (#2580)
853b0882b [regtest/aports] Update "rebuild" text (#2584)
19dbb2d5e [builtin/getopts] The -- arg stops flag processing (#2583)
5e3513b14 [regtest/aports] Add cause for package megacmd (#2592)
eMBee
afb65a536 [devtools] Use proper wget --directory-prefix flag (#2578)
Gabe
407e7da82 [builtin/cd] Accepts extra args for busybox ash compat, unless shopt --set strict_arg_parse (#2459)
f14838376 [oils] Add /sbin and /usr/sbin to default PATH, like bash and ash (#2504)
77b9e7721 [regtest/aports] Add causes for bmake and crosstool-ng (#2567)
d9d07757d [osh] In here docs, \" is literal rather than a escape for " (#2582)
aab8f9566 [osh] set -u failures under eval are fatal in non-interactive shells (#2576)
Melvin Walls
33f536cde [mycpp] Add GC rooting rule and re-enable Souffle (#2449)
Nindaleth
636d76b9e [doc/eggex] Fix reg_newline examples (#2568)
Paul Rigor
91a86071b [spec/divergence] Add failing test case for test builtin parsing bug (#2457)
42f20ea6b [main] Emulate BASH_VERSINFO (#2471)
Stewart Laufer
8da5037f2 [doc/getting-started] Fix typo in link to "Wiki: How To Test OSH" (#2575)

Here are some of my changes. Now that I see the builtin/trap commits below, I also want to point out:

It gives a concrete example of how we upgrade OSH into YSH ! I hope our upcoming dev guide can cover some of this.

Andy Chu
27fa3bc9b [release] Shell functions for 0.36.0
fa1744714 [demo] Add 2 demos that we've been using to debug the ulimit bug
fcf3180e4 [regtest/aports] Able to run on 'community' packages
d84ef05ee [regtest/aports] Update Oils version
f10f21ef6 [demo] Add fd-strace.sh script used to review PR
3b9269464 [regtest/aports] Remove old non-overlayfs code
a3c2fbffc [spec/builtin-trap] Minor test case improvements
21c9dab69 [frontend/signal refactor] Simplify metaprogramming
c52b099dd [builtin/trap] Fix sorting of signals
9e25f62d2 [builtin/trap refactor] Use public arg parsing API
c5d4d1079 [builtin/trap] Fix error locations when multiple trap names are passed
0f253645f [builtin/trap] Implement ParseSignalOrHook(), and start using it
cc7920f08 [builtin/trap refactor] Migrate to new parsing code
438ee197d [builtin/trap] Implement trap --remove for YSH
c486d1edc [builtin/trap] YSH trap is restricted with shopt --set simple_trap_builtin
3b2a80458 [builtin/trap] Implement trap --add { } for YSH
d0ed440bd [builtin/trap cleanup] Minor cleanup
658d32d35 [builtin/trap refactor] Minor simplifications
5f76c2ddd [pyext] Add F_DUPFD_CLOEXEC binding
3146d2925 [test/lint] Fix build
7792d8b49 [test/signal] Fix harness to show that OSH doesn't implement trap ''
70d0abd9b [spec/builtin-printf] Adjust allowed failures
c95b9c139 [regtest/aports] Archive distfiles for community
1771f2804 [regtest/aports] Update Oils tarball version
4b198591b [builtin/trap] Document and test behavior of trap --add { echo $x }
2dbd5787f [spec/builtin-trap] Add test case for trap state in subshells
9c011c844 [spec/divergence] Failing test case for !( parsing issue
6c080815d [regtest/aports] Add percentages in summary
ad3cb1e9d [spec/divergence] Test cases for closing (( and $(( - #2337
1688e86a2 [regtest/aports] Enhance the published.html
e8c88c33e [spec/divergence] Try to confuse other shells with nested ))
66557ed11 [builtin/alias] Fix error locations
bf18bec21 [spec/paren-ambiguity] Add exact cases from regtest/aports, move to new file
ff20d3429 [test/spec] Able to run spec tests with zsh-5.9
43b3efbc4 [regtest/aports] Make tarball ID a param
2f081fb16 [doc/ref] Document $[] and @[] in expressions
f8b505953 [builtin/printf] Test handling of space in values given to %d %x etc.
59ac1aff9 [builtin/printf] Simplify code even more
45e768573 [build] Break dependency _build/oils.sh -> build/dev-shell.sh
6a621e6a6 [build cleanup] Remove mentions of oil_DEPS
7fae688c2 [lexer refactor] Inline rules that are now only used once
afc70a261 [cleanup] Rename Oil -> Oils (or OSH / YSH)
2f3030a7e [regtest/aports] Instructions for editing causes
d9109a996 [cleanup] Remove remnants of the TokenTooLong check
ad9cca972 [doc] Simplify README, emphasizing dev build vs. release build
b759afdbb [soil] Remove .circleci
6b3322c5b [devtools] Fix type checking of mycpp after wedge changes
4378d5cdd [spec/zsh-assoc] Start using zsh 5.9
48f8a3776 [build/deps] install-wedges uses the new ../oils.DEPS layout
0a22dcc65 [soil/dev-setup-*] Test the new ../oils.DEPS wedges
c97746aa2 [spec/builtin-kill] Disable mksh in flaky test
e65eaff5a [regest/aports] Publish reports with updated causes
83507f6bd [regtest/aports] Publish new report
851ac1ae4 [regtest/aports] Fix patch files
399c66bc1 [regtest/aports] Published new reports, with new causes
6298b8623 [regtest/aports] Fix log truncation bug; update summary HTML
db0ab1449 [regtest/aports] Show table of common causes
09e327bd9 [web] CSS fix for iPad
a9577ef07 [regest/aports] Demo of using 'taskset' to pin package builds
1f33b1fd0 [regtest/aports] Add more_abuild_flags to regtest/aports-guest.sh
4eed22b23 [regtest/aports] Build packages in parallel
dfb93257c [regtest/aports] Change timeout from 5 minutes -> 15
1dd8b98c9 [regtest/aports] Run with 2 CPUs per job
094ba0fb9 [mycpp] Assert the number of type errors, so we don't regress
4c4baa80b [regtest/aports] Put timeout disagreements toward the top of the report
252d42b92 [regtest/aports] Try oversubscribing each core
62783bb98 [regtest/aports] Save portions of /proc/{stat,vmstat}
80d83596a [regtest/aports cleanup] Remove syntax that OPy doesn't understand
e5748e0e2 [regtest/aports] Save /proc/{meminfo,diskstats} too
d8987bc08 [regtest/aports] Rename stat_log.py -> proc_log.py
6a2a73a3e [regtest/aports] Turn on abuild -k -K
df5128d36 [regtest/aports] Suppress 'find' failure
e57688a41 [regtest/aports fix] Clean up _chroot/packager-layers before starting
568316fdd [regtest/aports] Fix the case where we don't have $XARGS_SLOT
9830859de [spec/divergence] Failing test cases for ( = ) and ( == )
2a50eef6f [spec/arith-context] Minor cleanup
b4af2f739 [builtin/test] Fix 2 parsing bugs
e023633e1 [test/spec refactor] Move test cases into new file spec/bool-parse
9a7092cf4 [regtest] Add experimental plot.sh script
82f580f3a [regtest/aports] Found that we have a lot of idle time
2f6a1bddd [regtest/aports] Remove obsolete code
7321bbbf6 [spec/bugs] Failing test case for $[] case bug
d9b198097 [frontend] Fix abbreviation of command.Simple
bd011511d [core rename] Rename UserExit -> HardExit
ea293a1cb [build/deps] Check for failure in install-wedges
881776ea8 [soil] Fix the list of jobs that maybe-merge knows about
a9e1764bc [release] Bump version to 0.37.0
962a832fa [release] Fixes and shell functions for Oils 0.37.0
af6cc6b19 [mycpp build] Allow nosouffle variant to be built without _bin/datalog
e487951a2 [doc/ref] Describe what the syntax of the kill builtin is (#2509)

Conclusion

OSH was already the most bash -compatible shell, and we're making it even better, guided by the regtest/aports harness.

There is interest in regtest/debian , and translating Oils to Rust. But we will need help . It's definitely possible, but there's a lot of deep technical work.

We're trying to make the project more collaborative, e.g. by writing a dev guide. If you want to help, the best way is to join Zulip and see if say 20% of the threads make sense. Please ask questions!

Let me know what you think about all this in the comments !


Finally, this discussion may give you a sense of the bigger picture:

My comment on

If you could redesign Linux userland from scratch, what would you do differently? (ask)
96 points, 206 comments on 2025-10-18

Appendix: Metrics for the 0.37.0 Release

Let's review release metrics, which help me keep track of the project. The last review was in September , for Oils 0.35.0.

Docs

Whenever we implement a new feature in OSH or YSH , we update the Oils Reference .

I also improved several topics, and automatically checked code examples for accuracy.

Spec Tests

OSH made great progress, with 92 new passing spec tests.

They all pass in C++ as well:


YSH has 13 more tests passing:

They all pass in C++ as well:

Benchmarks

This speed up is due to re-enabling the Souffle GC optimizations:

Memory usage went down slightly:

Runtime

This speed up is also due to re-enabling the Souffle GC optimizations:

I mentioned in the previous post that our first attempt at the builtin cat optimization wasn't sound. So we no longer beat bash by a significant margin on the autotools configure workload:

Code Size

Even though many new tests pass, our code is still short:

There are fewer lines of code, due to the Souffle GC optimizations:

And fewer bytes of compiled code:

Appendix: Docker -> Podman Migration

I also mentioned in the last post that we finally switched from Docker to podman! That’s something I’ve wanted to do for a while.

What I learned is that OCI containers are pretty tightly coupled to the OS underneath. For example, by default, Debian uses an inefficient vfs storage driver, not the OverlayFS driver!

So here are the commits related to that:

487d804e1 [deps] Able to build zsh 5.9 wedge
f7493912a [deps] Test rebuilding all wedges inside a container (boxed)
c5d3741e9 [deps] Introduce new 'wedge boxed-2025' command, build all wedges
4575cce87 [build/deps] Change boxed dir to _build/boxed/wedge
721e3df6c [deps/wedge] Fix typo
8149ccfda [deps] Rebuild soil-bloaty image with new wedges
fc06038e2 [deps] Rebuild soil-{benchmarks,benchmarks2}
8134e7674 [deps] Working on new location 'unboxed' wedges
d33bf4b26 [deps] Got new 'unboxed' builds working in ../oils.DEPS/wedge
5f5f9a4c7 [build/deps] Remove unused code
e860838dd [deps] Fix mkdir /wedge bug
7b6bc410a [deps] Fix py3-libs directory
d7d68278e [deps] Account for py3-libs special case
ea6d86342 [deps] Re-build and deploy soil-benchmarks2
5bc5b4f26 [deps] Rebuild soil-other-tests, soil-bloaty
ab61a7d22 [deps] Rebuild soil-app-tests
8b5f4690b [deps] Rebuild soil-wild, soil-pea, soil-dev-minimal
087fe4ae5 [deps] Rebuild soil-cpp-small
c7a281ac4 [deps] Rebuild soil-cpp-spec
95304d1b5 [deps] Rebuild soil-{clang,benchmarks,ovm-tarball,dummy}
2c12ca50d [soil] Rebuilt all images
2b4845ae9 [soil] Revert image version to fix build
25cc5c884 [deps] Rebuild base image soil-debian-12 with new wedges
8d281eb1d [deps] Code cleanup, and rebuild all images
6a791c9a4 [build] Move old wedge paths to a separate file
b7f75beec [cleanup] Remove build/dev-shell.sh in a few places
3f6ced711 [build] benchmarks/report.sh uses new dev-shell.sh convention
3eac551aa [build] Move old code into old-wedges.sh
b188ba326 [build refactor] Move more old code in to old-wedges.sh
1e8d2e478 [deps] Improve automation for full-soil-rebuild
a1216af92 [deps] Add zsh 5.9 to soil-ovm-tarball
299843317 [deps] Install jq and procps with wedge-deps-debian
7be45d3d6 [build cleanup] Remove old wedge dirs from _bin/shwrap/pea_main
41572f744 [soil] Migrate all containers to podman
98d06d966 [deps refactor] Move code to new scripts
9a4feb8fc [deps] Rebuild soil-{dummy,pea} with podman
e68044193 [deps] Rebuilt ALL images with podman
09d55af41 [deps] Change the default from docker to podman

The Code That Revolutionized Orbital Simulation

Lobsters
www.youtube.com
2025-12-11 22:16:48
Comments...

The Star-Studded Fight for the Immortal Soul of an Upper West Side Church Rages On

hellgate
hellgatenyc.com
2025-12-11 22:02:45
Representatives for the 12 remaining congregants of West Park Presbyterian Church are appealing to the Landmarks Preservation Commission to allow them to remove their landmark status and sell the building—but some famous New Yorkers stand in their way....

Hackers exploit Gladinet CentreStack cryptographic flaw in RCE attacks

Bleeping Computer
www.bleepingcomputer.com
2025-12-11 21:49:10
Hackers are exploiting a new, undocumented vulnerability in the implementation of the cryptographic algorithm present in Gladinet's CentreStack and Triofox products for secure remote file access and sharing. [...]...
Original Article

Hackers exploit Gladinet CentreStack cryptographic flaw in RCE attacks

Hackers are exploiting a new, undocumented vulnerability in the implementation of the cryptographic algorithm present in Gladinet's CentreStack and Triofox products for secure remote file access and sharing.

By leveraging the security issue, the attackers can obtain hardcoded cryptographic keys and achieve remote code execution, researchers warn.

Although the new cryptographic vulnerability does not have an official identifier, Gladinet notified customers about it and advised them to update the products to the latest version, which, at the time of the communication, had been released on November 29.

The company also provided customers with a set of indicators of compromise (IoCs), indicating that the issue was being exploited in the wild.

Security researchers at managed cybersecurity platform Huntress are aware of at least nine organizations targeted in attacks leveraging the new vulnerability along with an older one tracked as CVE-2025-30406 - a local file inclusion flaw that allows a local attacker to access system files without authentication.

Hardcoded cryptographic keys

Using the IoCs from Gladinet, Huntress researchers were able to determine where the flaw was and how threat actors are leveraging it.

Huntress found that the issue stems from the custom implementation of the AES cryptographic algorithm in Gladinet CentreStack and Triofox, where the encryption key and Initialization Vector (IV) were hardcoded inside the GladCtrl64.dll file and could be easily obtained.

Specifically, the key values were derived from two static 100-byte strings of Chinese and Japanese text, which were identical across all product installations.

The flaw lies in the processing of the ‘ filesvr.dn ’ handler, which decrypts the ‘ t ’ parameter (Access Ticket) using those static keys, Huntress explains .

Anyone extracting those keys could decrypt the Access Tickets containing file paths, usernames, passwords, and timestamps, or create their own to impersonate users and instruct servers to return any file on the disk.

“Because these keys never change, we could extract them from memory once and use them to decrypt any ticket generated by the server or worse, encrypt our own,” the researchers say.

Huntress observed that Access Tickets were forged using hardcoded AES keys and setting the timestamp to year 9999, so the ticket never expires.
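
To make the failure mode concrete, here is a minimal sketch of why a static key and IV are fatal, using Python's cryptography package. The key, IV, and ticket layout below are invented for illustration and are not the actual Gladinet values or format:

# Illustration only: AES-CBC with a key and IV that are identical on every install.
# Anyone who extracts these constants once can decrypt every ticket, or mint their own.
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

STATIC_KEY = b"0123456789abcdef0123456789abcdef"  # hypothetical hardcoded 32-byte key
STATIC_IV = b"0123456789abcdef"                   # hypothetical hardcoded 16-byte IV

def encrypt_ticket(plaintext: bytes) -> bytes:
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(STATIC_KEY), modes.CBC(STATIC_IV)).encryptor()
    return enc.update(padded) + enc.finalize()

def decrypt_ticket(ciphertext: bytes) -> bytes:
    dec = Cipher(algorithms.AES(STATIC_KEY), modes.CBC(STATIC_IV)).decryptor()
    padded = dec.update(ciphertext) + dec.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()

# A forged "never expires" ticket, mirroring the year-9999 trick described above.
forged = encrypt_ticket(b"path=/web.config;user=admin;expires=9999-12-31")
print(decrypt_ticket(forged))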

The attackers next requested the server’s web.config file. Since it contains the machineKey , they were able to use it to trigger remote code execution through a ViewState deserialization flaw.

Exploitation activity
Source: Huntress

Besides an attacking IP address, 147.124.216[.]205, no specific attribution has been made for those attacks.

Regarding the targets, Huntress confirmed nine organizations as of December 10, from various sectors, including healthcare and technology.

Users of Gladinet CentreStack and Triofox are recommended to upgrade to version 16.12.10420.56791 (released on December 8) as soon as possible and also rotate the machine keys.

Additionally, it is recommended to scan logs for the ‘ vghpI7EToZUDIZDdprSubL3mTZ2 ’ string, which is associated with the encrypted file path, and is considered the only reliable indicator of compromise.
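
A minimal way to sweep plain-text logs for that indicator from Python; the log directory below is a placeholder, so point it at wherever your CentreStack/Triofox and IIS logs actually live:

from pathlib import Path

IOC = "vghpI7EToZUDIZDdprSubL3mTZ2"  # indicator string from the advisory above
LOG_DIR = Path(r"C:\inetpub\logs")   # placeholder; adjust to your environment

for log_file in LOG_DIR.rglob("*.log"):
    try:
        for line_no, line in enumerate(log_file.read_text(errors="ignore").splitlines(), 1):
            if IOC in line:
                print(f"{log_file}:{line_no}: {line.strip()}")
    except OSError:
        pass  # skip files we cannot read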

Huntress provides mitigation guidance in its report, along with indicators of compromise that defenders can use to protect their environments or determine if they were breached.


Adams Names Police Accountability Foe to Chair Police Accountability Agency

hellgate
hellgatenyc.com
2025-12-11 21:49:07
Former Post columnist Pat Smith has big plans to make police misconduct investigators start loving cops, but he may not be in his position for long....
Original Article
Adams Names Police Accountability Foe to Chair Police Accountability Agency
Newly installed Interim Chair of the Civilian Complaint Review Board Pat Smith (CCRB)


23,746 Patients Died on Waitlists in Past Year

Hacker News
secondstreet.org
2025-12-11 21:45:03
Comments...
Original Article
  • Government data obtained through FOI shows 23,746 patients died on waitlists during the past fiscal year, bringing the total to over 100,000 since 2018

Canadian think tank SecondStreet.org released government data today showing at least 23,746 patients died on government waiting lists over the past fiscal year. The data was obtained through Freedom of Information (FOI) requests and covers a wide array of services – heart surgery, hip operations, MRI scans, etc.

“What’s really sad is that behind many of these figures are stories of patients suffering during their final years – grandparents who dealt with chronic pain while waiting for hip operations, people leaving children behind as they die waiting for heart operations, so much suffering,” said SecondStreet.org President Colin Craig. “It doesn’t have to be this way. If we copied better-performing European public health systems, we could greatly reduce patient suffering.”

The data covers the fiscal year April 1, 2024 – March 31, 2025. Highlights include:

  • At least 23,746 patients died in Canada while waiting for surgeries or diagnostic scans. This figure does not include Alberta and some parts of Manitoba, while some health bodies only had data on surgeries, not diagnostic scans. Most provinces have no data on patients dying while waiting for specialist appointments;
  • Comparing data from health care bodies that provided information for both this year and last year shows a 3% increase in waiting list deaths. Patients died after waiting anywhere from less than a week to nearly nine years;
  • New data from Ontario Health suggests 355 patients died while waiting for cardiac surgery or a cardiac procedure. While many cases did not include target wait times for providing treatment, there were at least 90 cases where patients died after waiting past targets that were stated or after waiting more than 90 days; and
  • Since April 2018, SecondStreet.org has gathered government data showing more than 100,876 cases where Canadians died while waiting for care. In previous years, large portions of data were missing, so the total is likely much higher.

“It’s interesting that governments will regularly inspect restaurants and report publicly if there’s a minor problem such as a missing paper towel holder,” added Craig. “Meanwhile, no government reports publicly on patients dying on waiting lists. It’s quite hypocritical.”

To view the report – click here

To view the government FOI responses, see below.

Two new RSC protocol vulnerabilities uncovered

Hacker News
nextjs.org
2025-12-11 21:37:58
Comments...
Original Article

Note: Some patched versions are still being released to npm. If a version listed below is not yet available, please check back shortly.

Two additional vulnerabilities have been identified in the React Server Components (RSC) protocol. These issues were discovered while security researchers examined the patches for React2Shell . Importantly, neither of these new issues allow for Remote Code Execution. The patch for React2Shell remains fully effective.

These vulnerabilities originate in the upstream React implementation ( CVE-2025-55183 , CVE-2025-55184 ). This advisory tracks the downstream impact on Next.js applications using the App Router. For full details, see the React blog post .

Impact

Denial of Service: CVE-2025-55184 (High Severity)

A specifically crafted HTTP request can be sent to any App Router endpoint that, when deserialized, can cause an infinite loop that hangs the server process and prevents future HTTP requests from being served.

Source Code Exposure: CVE-2025-55183 (Medium Severity)

A specifically crafted HTTP request can cause a Server Function to return the compiled source code of other Server Functions in your application. This could reveal business logic. Secrets could also be exposed if they are defined directly in your code (rather than accessed via environment variables at runtime) and referenced within a Server Function. Depending on your bundler configuration, these values may be inlined into the compiled function output.
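
The distinction the advisory draws is the usual one between baking a secret into source, where a bundler can inline the literal into shipped output, and resolving it from the environment at runtime. A language-agnostic sketch of the two patterns, shown in Python for illustration (the names and value are placeholders):

import os

# Risky: a literal secret in source code can end up inlined into compiled or bundled output.
HARDCODED_KEY = "sk-example-not-a-real-key"  # placeholder value

# Safer: resolve the secret from the environment at runtime, so it never ships in the artifact.
RUNTIME_KEY = os.environ.get("API_KEY", "")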

Affected and Fixed Next.js Versions

Applications using React Server Components with the App Router are affected. The fixed release for each affected version line is:

  • >=13.3: upgrade to 14.2.34
  • 14.x: 14.2.34
  • 15.0.x: 15.0.6
  • 15.1.x: 15.1.10
  • 15.2.x: 15.2.7
  • 15.3.x: 15.3.7
  • 15.4.x: 15.4.9
  • 15.5.x: 15.5.8
  • 15.x canary: 15.6.0-canary.59
  • 16.0.x: 16.0.9
  • 16.x canary: 16.1.0-canary.17

Pages Router applications are not affected, but we still recommend upgrading to a patched version.

Required Action

All users should upgrade to the latest patched version in their release line:

If you are on Next.js >=13.3, 14.0.x, or 14.1.x, upgrade to the latest 14.2.x release.

There is no workaround. Upgrading to a patched version is required.

Resources

Discovery

Thank you to RyotaK from GMO Flatt Security Inc. and Andrew MacPherson for discovering and responsibly disclosing these vulnerabilities. We are intentionally limiting technical detail in this advisory to protect developers who have not yet upgraded.

React2Shell and related RSC vulnerabilities threat brief

Hacker News
blog.cloudflare.com
2025-12-11 21:36:56
Comments...
Original Article

2025-12-11

7 min read

On December 3, 2025, immediately following the public disclosure of the critical, maximum-severity React2Shell vulnerability (CVE-2025-55182), the Cloudforce One Threat Intelligence team began monitoring for early signs of exploitation. Within hours, we observed scanning and active exploitation attempts, including traffic originating from infrastructure associated with Asian-nexus threat groups.

Early activity indicates that threat actors quickly integrated this vulnerability into their scanning and reconnaissance routines. We observed systematic probing of exposed systems, testing for the flaw at scale, and incorporating it into broader sweeps of Internet‑facing assets. The identified behavior reveals the actors relied on a combination of tools, such as standard vulnerability scanners and publicly accessible Internet asset discovery platforms, to find potentially vulnerable React Server Components (RSC) deployments exposed to the Internet.

Patterns in observed threat activity also suggest that the actors focused on identifying specific application metadata — such as icon hashes, SSL certificate details, or geographic region identifiers — to refine their candidate target lists before attempting exploitation.

In addition to React2Shell, two additional vulnerabilities affecting specific RSC implementations were disclosed: CVE-2025-55183 and CVE-2025-55184. Both vulnerabilities, while distinct from React2Shell, also relate to RSC payload handling and Server Function semantics, and are described in more detail below.

Background: React2Shell vulnerability (CVE-2025-55182)

On December 3, 2025, the React Team disclosed a Remote Code Execution (RCE) vulnerability affecting servers using the React Server Components (RSC) Flight protocol. The vulnerability, CVE-2025-55182 , received a CVSS score of 10.0 and has been informally referred to as React2Shell.

The underlying cause of the vulnerability is an unsafe deserialization flaw in the RSC Flight data-handling logic. When a server processes attacker-controlled payloads without proper validation, it becomes possible to influence server-side execution flow. In this case, crafted input allows an attacker to inject logic that the server interprets in a privileged context.

Exploitation is straightforward. A single, specially crafted HTTP request is sufficient; there is no authentication requirement, user interaction, or elevated permissions involved. Once successful, the attacker can execute arbitrary, privileged JavaScript on the affected server.

This combination of unauthenticated access, trivial exploitation, and full code execution is what places CVE-2025-55182 at the highest severity level and makes it significant for organizations relying on vulnerable versions of React Server Components.

In response, Cloudflare has deployed new rules across its network, with the default action set to Block. These new protections are included in both the Cloudflare Free Managed Ruleset (available to all Free customers) and the standard Cloudflare Managed Ruleset (available to all paying customers), as detailed below. More information about the different rulesets can be found in our documentation .

  • CVE-2025-55182 (React - RCE): rules to mitigate the React2Shell exploit.
    Paid rule ID: 33aa8a8a948b48b28d40450c5fb92fba
    Free rule ID: 2b5d06e34a814a889bee9a0699702280

  • CVE-2025-55182 - 2 (React - RCE Bypass): additional rules to mitigate an exploit bypass.
    Paid rule ID: bc1aee59731c488ca8b5314615fce168
    Free rule ID: cbdd3f48396e4b7389d6efd174746aff

  • CVE-2025-55182 (Scanner Detection): additional paid WAF rule to catch React2Shell scanning attempts.
    Paid rule ID: 1d54691cb822465183cb49e2f562cf5c

Recently disclosed RSC vulnerabilities

In addition to React2Shell, two additional vulnerabilities affecting specific RSC implementations were disclosed. The two vulnerabilities, while distinct from React2Shell, also relate to RSC payload handling and Server Function semantics, with corresponding Cloudflare protections noted below:

  • CVE-2025-55183 (Leaking Server Functions): in deployments where Server Function identifiers are insufficiently validated, an attacker may force the server into returning the source body of a referenced function.
    Paid rule ID: 17c5123f1ac049818765ebf2fefb4e9b
    Free rule ID: 3114709a3c3b4e3685052c7b251e86aa

  • CVE-2025-55184 (React Function DoS): a crafted RSC Flight payload containing cyclical Promise references can trigger unbounded recursion or event-loop lockups under certain server configurations, resulting in denial-of-service conditions.
    Paid rule ID: 2694f1610c0b471393b21aef102ec699

Investigation of early scanning and exploitation

The following analysis details the initial wave of activity observed by Cloudforce One, focusing on threat actor attempts to scan for and exploit the React2Shell vulnerability. While these findings represent activity immediately following the vulnerability's release, and were focused on known threat actors, it is critical to note that the volume and scope of related threat activity have expanded dramatically since these first observations.

Tactics

Unsurprisingly, the threat actors relied heavily on a mix of publicly available, commercial, and other tools to identify vulnerable servers:

  • Vulnerability intelligence : The actors leveraged vulnerability intelligence databases that aggregated CVEs, advisories, and exploits for tracking and prioritization.

  • Vulnerability reconnaissance : The actors conducted searches using large-scale reconnaissance services, indicating they are relying on Internet-wide scanning and asset discovery platforms to find exposed systems running React App or RSC components. They also made use of tools that identify the software stack and technologies used by websites.

  • Vulnerability scanning : Activity included use of Nuclei (User-Agent: Nuclei - CVE-2025-55182), a popular rapid scanning tool used to deploy YAML-based templates to check for vulnerabilities. The actors were also observed using what is highly likely a custom React2Shell scanner, associated with the User-Agent "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36 React2ShellScanner/1.0.0".

  • Vulnerability exploitation : The actors made use of Burp Suite, a web application security testing platform for identifying and exploiting vulnerabilities in HTTP/S traffic.

Techniques

Recon via Internet-wide scanning and asset discovery platform
To enumerate potential React2Shell targets, the actors leveraged an Internet-wide scanning and asset-discovery platform commonly used to fingerprint web technologies at scale. Their queries demonstrated a targeted effort to isolate React and Next.js applications — two frameworks directly relevant to the vulnerability — by searching for React-specific icon hashes, framework-associated metadata, and page titles containing React-related keywords. This approach likely allowed them to rapidly build an inventory of exploitable hosts before initiating more direct probing.

Targeting enumeration and filtering
During their reconnaissance phase, the operators applied additional filtering logic to refine their target set and minimize noise. Notably, they excluded Chinese IP space from their searches, indicating that their enumeration workflow intentionally avoided collecting data on possibly domestic infrastructure. They also constrained scanning to specific geographic regions and national networks to identify likely high-value hosts. Beyond basic fingerprinting, the actors leveraged SSL certificate attributes — including issuer details, subject fields, and top-level domains — to surface entities of interest, such as government or critical-infrastructure systems using .gov or other restricted TLDs. This combination of geographic filtering and certificate-based pivoting enabled a more precise enumeration process that prioritized strategically relevant and potentially vulnerable high-value targets.

Preliminary target analysis
Observed activity reflected a clear focus on strategically significant organizations across multiple regions. Their highest-density probing occurred against networks in Taiwan, Xinjiang Uygur, Vietnam, Japan, and New Zealand — regions frequently associated with geopolitical intelligence collection priorities. Other selective targeting was also observed against entities across the globe, including government (.gov) websites, academic research institutions, and critical‑infrastructure operators. These infrastructure operators specifically included a national authority responsible for the import and export of uranium, rare metals, and nuclear fuel.

The actors also prioritized high‑sensitivity technology targets such as enterprise password managers and secure‑vault services, likely due to their potential to provide downstream access to broader organizational credentials and secrets.

Additionally, the campaign targeted edge‑facing SSL VPN appliances whose administrative interfaces may incorporate React-based components, suggesting the actor sought to exploit React2Shell against both traditional web applications and embedded web management frameworks in order to maximize access opportunities.

Early threat actor observations
Cloudforce One analysis confirms that early scanning and exploitation attempts originated from IP addresses previously associated with multiple Asia-affiliated threat actor clusters.  While not all observed IP addresses belong to a single operator, the simultaneous activity suggests shared tooling, infrastructure, or experimentation in parallel among groups with a common purpose and shared targeting objectives. Observed targeting enumeration and filtering (e.g. a focus on Taiwan and Xinjiang Uygur, but exclusion of China), as well as heavy use of certain scanning and asset discovery platforms, suggest general attribution to Asia-linked threat actors.

Cloudflare’s Managed Rulesets for React2Shell began detecting significant activity within hours of the vulnerability’s disclosure. The graph below shows the daily hit count across the two exploit-related React2Shell WAF rules.


Aggregate rule hit volume over time

The React2Shell disclosure triggered a surge of opportunistic scanning and exploit behavior. In total, from 2025-12-03 00:00 UTC to 2025-12-11 17:00 UTC, we received 582.10M hits. That equates to an average of 3.49M hits per hour, with a maximum number of hits in a single hour reaching 12.72M. The average unique IP count per hour was 3,598, with the maximum number of IPs in an hour being 16,585.


Hourly count of unique IPs sending React2Shell-related probes

Our data also shows distinct peaks above 6,387 User-Agents per hour, indicating a heterogeneous mix of tools and frameworks in use, with the average number of unique User-Agents per hour being 2,255. The below graph shows exploit attempts based on WAF rules (Free and Managed) triggering on matching payloads:


Unique User-Agent strings used in React2Shell-related requests

To better understand the types of automated tools probing for React2Shell exposure, Cloudflare analyzed the User-Agent strings associated with React2Shell-related requests since December 3, 2025. The data shows a wide variety of scanning tools suggesting broad Internet-wide reconnaissance:

Top 10 User Agent strings by exploit attempts

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36 Assetnote/1.0.0

Block Security Team/Assetnote-HjJacErLyq2xFe01qaCM1yyzs

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36 (GIS - AppSec Team - Project Vision)

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36

python-requests/2.32.5

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36 Assetnote/1.0.0 (ExposureScan)

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/142.0.0.0 Safari/537.36

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36

Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.1 Safari/605.1.1

Payload variation and experimentation

Cloudflare analyzed the payload sizes associated with requests triggering React2Shell-related detection rules. The long-tailed distribution — dominated by sub-kilobyte probes but punctuated by extremely large outliers — suggests actors are testing a wide range of payload sizes:

  • Maximum payload size: 375 MB
  • Average payload size: 3.2 KB
  • p25 (25th percentile): 703 B
  • p75 (75th percentile): 818 B
  • p90 (90th percentile): 2.7 KB
  • p99 (99th percentile): 66.5 KB
  • Standard deviation: 330 KB

Additional React vulnerabilities identified

In parallel with our ongoing analysis of the React2Shell vulnerability, two additional vulnerabilities affecting React Server Components (RSC) implementations have been identified:

1. React function DoS

The vulnerability CVE-2025-55184 was recently disclosed, revealing that React Server Component frameworks can be forced into a Node.js state where the runtime unwraps an infinite recursion of nested Promises.

This behavior:

  • Freezes the server indefinitely

  • Prevents yielding back to the event loop

  • Effectively takes the server offline

  • Does not require any specific Server Action usage — merely the presence of a server capable of processing an RSC Server Action payload

The trigger condition is a cyclic promise reference inside the RSC payload.

2. Leaking server functions

Another vulnerability, CVE-2025-55183 , was also recently disclosed, revealing that certain React Server Component frameworks can leak server-only source code under specific conditions.

If an attacker gains access to a Server Function that:

  • Accepts an argument that undergoes string coercion, and

  • Does not validate that the argument is of an expected primitive type

then the attacker can coerce that argument into a reference to a different Server Function. The coerced value’s toString() output causes the server to return the source code of the referenced Server Function.
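As a loose illustration of that coercion pattern (a sketch under assumptions, not the actual exploit and not any framework's real code; the function names are hypothetical):

'use server';

// Vulnerable shape: the argument is interpolated into a string with no type
// check, so a non-primitive value's toString() output ends up in the response.
export async function greet(name: unknown): Promise<string> {
  return `Hello, ${name}!`;
}

// Hardened shape: reject anything that is not the expected primitive type
// before any string coercion takes place.
export async function greetSafe(name: unknown): Promise<string> {
  if (typeof name !== 'string') {
    throw new Error('Expected a string argument');
  }
  return `Hello, ${name}!`;
}

The hardened version simply fails fast instead of coercing whatever reference it was handed.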

How Cloudflare is protecting customers

Cloudflare’s protection strategy is multi-layered, relying on both the inherent security model of its platform and immediate, proactive updates to its Web Application Firewall (WAF).

  • Cloudflare Workers: React-based applications and frameworks deployed on Cloudflare Workers are inherently immune. The Workers security model prevents exploits from succeeding at the runtime layer, regardless of the malicious payload.

  • Proactive WAF deployment: Cloudflare urgently deployed WAF rules to detect and block traffic proxied through its network related to React2Shell and the recently disclosed RSC vulnerabilities.

The Cloudflare security team continues to monitor for additional attack variations and will update protections as necessary to maintain continuous security for all proxied traffic.

Continuous monitoring

While Cloudflare's emergency actions — the WAF limit increase and immediate rule deployment — have successfully mitigated the current wave of exploitation attempts, this vulnerability represents a persistent and evolving threat. The immediate weaponization of CVE-2025-55182 by sophisticated threat actors underscores the need for continuous defense.

Cloudflare remains committed to continuous surveillance for emerging exploit variants and refinement of WAF rules to detect evasive techniques. However, network-level protection is not a substitute for remediation at the source. Organizations must prioritize immediate patching of all affected React and Next.js assets. This combination of platform-level WAF defense and immediate application patching remains the only reliable strategy against this critical threat.

Indicators of Compromise

  • Nuclei
    User-Agent: Nuclei - CVE-2025-55182
    Purpose: rapid, template-based scanning for the React2Shell vulnerability

  • React2ShellScanner
    User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36 React2ShellScanner/1.0.0
    Purpose: likely custom React2Shell vulnerability scanner

Cloudflare's connectivity cloud protects entire corporate networks , helps customers build Internet-scale applications efficiently , accelerates any website or Internet application , wards off DDoS attacks , keeps hackers at bay , and can help you on your journey to Zero Trust .

Visit 1.1.1.1 from any device to get started with our free app that makes your Internet faster and safer.

To learn more about our mission to help build a better Internet, start here . If you're looking for a new career direction, check out our open positions .

Vulnerabilities Threat Intelligence Research

Powder and Stone. Or, Why Medieval Rulers Loved Castles

Hacker News
1517.substack.com
2025-12-11 21:35:52
Comments...
Original Article

Thesis: Castles created the modern state. Part 1 of the series explores the technology and why it was so important to medieval Europeans. It answers, “But why castles?”

    

In any specific action, in any measure we may undertake, we always have the choice between the most audacious and the most careful solution. Some people think that the theory of war always advises the latter. That assumption is false. If the theory does advise anything, it is the nature of war to advise the most decisive, that is, the most audacious . Theory leaves it to the military leader, however, to act according to their own courage, according to their spirit of enterprise, and their self-confidence. Make your choice , therefore, according to this inner force; but never forget that no military leader has ever become great without audacity.

Clausewitz, Carl. “Principles of War.” Military Service Publishing Company, 1942. Accessible here

Medieval rulers would have heartily disagreed with von Clausewitz. Their constraints and dogma would have led them to say, “why not both?” Why not have your cake and eat it too? The most careful solution and the most daring one at the same time?

What if you could have defense and offense in a single, elegant solution?

     

Historians have wondered for a long time where lords acquired their powers of justice and command. […] Others have seen the origin of seignorial power in the droit de ban - the right to command, coerce, and punish originally delegated by the king to his officers and then increasingly appropriated by them. […] Current research leads us to think that without the instrument represented by the motte and bailey castle, the final appropriation of the droit de ban by the king’s officers or the usurpation of this power by the wealthiest landowners would never have taken place. The motte accelerated the tipping of the balance of power toward the seignory. […] The most important factor was the lordship inherent in possession of a castle; the motte and bailey castle crystallized power and in some cases even created it.

— Bur, Michel. “The Motte and Bailey Castle: Instrument of Revolution.” Engineering and Science 45, no. 3 (1982): 11-14.

As with old music, there’s a selection bias in the castles we remember. In our collective consciousness, we associate the word castle with beautiful buildings like these,

Buildings exemplary both in history and their beauty. But they’re the exceptions, not the rule.

Early castles were stark in their simplicity. They had to be because of their strategic role in capturing territory. Soldiers would arrive on foreign land, temporarily secure it, create a castle, 1 leave a garrison, and then move on. It has been suggested by some historians that between the start of William the Conqueror’s invasion in 1066 and his death in 1087, his army had built at least 500 castles across England. 2 Somewhere between 500 and 550 of these were occupied at the time of his death. 3

The overwhelming majority of these castles were cheap and disposable . Dollar store castling. Affordable earthen mounds and wooden walls — within reach for even the most impoverished of feudal lords. Typically taking the motte-and-bailey form,

An actual motte-and-bailey

When medieval rulers c. 900 AD built castles, they mostly built motte-and-baileys. And they did so in astonishingly scrappy ways. More generally, early medieval rulers built, used and abused castles in radically different ways than the pop-culture, Disney movie version of castling.

    

I worked with an amateur archaeologist and trained architect, J. Lyonsmith, who studies historical European martial arts and the way in which Europe once waged war. Using archaeological data, dig findings, and a 3D photogrammetric map, we studied a small motte-and-bailey, Castle Pulverbatch in Shropshire, England.

Pulverbatch is an exceptionally preserved, entirely wooden motte-and-bailey, offering us a great example of what it could have been like to build one of these castles. It’s hard to do the site justice with photos alone, via the 3D aerial survey map ,

Pulverbatch shows one of the most counter-intuitive things about ancient warfare: they loved to dig and build. The Romans would set up palisades and fortifications at every given opportunity. They were so prolific that they exported their techniques to their enemies:

Disappointed in this hope, the Nervii surround the winter-quarters with a rampart eleven feet high , and a ditch thirteen feet in depth . These military works they had learned from our men in the intercourse of former years, and, having taken some of our army prisoners, were instructed by them: but, as they had no supply of iron tools which are requisite for this service, they were forced to cut the turf with their swords , and to empty out the earth with their hands and cloaks , from which circumstance, the vast number of the men could be inferred; for in less than three hours they completed a fortification of ten miles in circumference ; and during the rest of the days they began to prepare and construct towers of the height of the ramparts , and grappling irons, and mantelets, which the same prisoners had taught them.

— Caesar, Julius. “Commentarii de bello Gallico” c. 49 B.C. Translated & Re-published as “The Gallic Wars.” by W. A. McDevitte and W. S. Bohn, Harper & Brothers, 1869. Accessible Here. (Caesar 5:42)

Even when deep in enemy territory, a Roman legion on the march would set up a fortified, highly-organized camp by the end of the day. Their carpenters and military engineers could create a cheap, disposable fort within hours under enemy fire. Much like cats, the Romans (and William the Conqueror) would find a spot that fits and would sit (behind a ditch with fortified wooden walls).

Very little evidence remains that tells us exactly how Pulverbatch was built nearly a thousand years ago. Thanks to archaeologists, we know a lot about the era’s construction techniques, but we don’t have a Castle Pulverbatch “recipe.” Or, a comprehensive set of techniques that would let us faithfully recreate their work with era-appropriate tools. 4 But what we do have is a beautifully embroidered, descriptive tapestry — the Bayeux Tapestry,

Legend has it that the tapestry was commissioned and created by William the Conqueror’s wife Queen Matilda and her ladies in waiting. Others contend that it was commissioned by his half brother, Odo – who was a bishop at Bayeux. But no matter who commissioned it, its beauty and utility remain the same. The tapestry portrays the most crucial ingredient of the Pulverbatch-era motte-and-bailey recipe – mechanically stabilized earth,

Or, in more detail,

Ian Bell’s drawings of Steven Bassett’s excavations 1972-1981 of Pleshey Castle, Essex via Castles Studies Trust

Ignoring the palisade and subsequent construction (that made the castle a castle), we can simplify and describe the initial process as,

  • Establish the castle’s perimeter

  • Dig a ditch along the perimeter

  • Move the dirt to one end of the site

  • Layer the dirt as you pile it and shape it into a mound

  • Put clay along the exterior of the mound.

This recipe was simple enough and castles were important enough that medieval rulers would have their soldiers (and subjects), to quote Caesar, “cut the turf with their swords, and to empty out the earth with their hands and cloaks.” The initial simplicity of motte-and-baileys allowed armies to brute force security.

There is a case to be made that they were (maybe) too simple. There was – apparently – a persistent problem of “illegal castling.” During a civil war or succession crisis, the disputing parties would start by building castles in disputed territory,

A similar phenomenon of “illegal” castle building occurred in times of crisis , such as succession or wardship, particularly around the middle of the 11th century in territories that were otherwise well under control. When the ruler recovered his power, he usually preferred to formalize the status quo rather than start a war with the new castle owners who had appeared during the crisis. In the long run, however, even these illegal castles usually ended up acquiring legality by agreement between parties. Those that stayed totally independent were very rare.

— Bur, Michel. “The Motte and Bailey Castle: Instrument of Revolution.” Engineering and Science 45, no. 3 (1982): 11-14.

All of the other trappings of castling that we see in classic diagrams of motte-and-baileys – the palisade, stone keep, draw bridge etc. – would usually be added over time. Doing a motte-and-bailey was – for the most part – an exercise in shoveling dirt more efficiently than the enemy.

Beyond the obvious defensive benefits, it’s worth asking – why? Why did they do this? And why was it so effective?

    

To understand why castles matter, we really need to understand wagons. The Tyranny of the Wagon Equation is my favorite post on this website. I had never given it much thought before, but it turns out that medieval logistics share a lot in common with the rocket equation. Roughly;

To get X (armies, explorers, astronauts, etc.) from A to B, you need a Resource R which is also a resource that you want to deliver at the end (food for the army, fuel for the explorers to explore with). If the thing (donkeys, horses, hydrazine, RP-1) that pushes X also consumes R, then the amount of R needed to get from A to B grows non-linearly as the number X increases. 5
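For the rocket half of the analogy, the governing relationship is the classical Tsiolkovsky equation (stated here for reference; the wagon case swaps propellant for fodder and draft animals):

Δv = v_e · ln(m_0 / m_f),  or equivalently  m_0 / m_f = exp(Δv / v_e)

So the mass you must start with grows exponentially with the change in velocity you want, because the propellant burned late in the trip must itself be hauled by propellant burned early on. Swap fuel for fodder and the same compounding hits a wagon train.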

In the post, the author shares an amazing series of diagrams showing what the wagon equation leads to in the real world,

You may think this diagram is merely a fellow nerd’s conceit, but for most militaries, it is a force as fundamental as gravity. When Argentina invaded the Falklands in 1982, a part of the UK’s answer was this diagram,

https://www.rafmuseum.org.uk/app/uploads/2022/06/Vulcan-refuelling-plan.jpg
The diagram describes the in-air refueling chain for Operation Black Buck. The diagram was obtained from a blog post on the RAF Museum’s website .

It is the wagon equation in tangible form. Planes refuelling other planes to refuel planes. The long-term solution to this would be establishing bases at islands along the way, sending fuel to these islands, and then sending planes that “island hop” until they reach their destination. The US pursued this strategy during WW2 to fight the war in the Pacific.

We had to fly those planes from the bases in Kansas to India. Then we had to fly fuel over the hump into China. [...] We were supposed to take these B-29s—there were no tanker aircraft there. We were to fill them with fuel, fly from India to Chengtu; offload the fuel; fly back to India; make enough missions to build up fuel in Chengtu; fly to Yawata, Japan; bomb the steel mills; and go back to India. We had so little training on this problem of maximizing [fuel] efficiency, we actually found to get some of the B-29s back instead of offloading fuel, they had to take it on. To make a long story short, it wasn’t worth a damn. And it was LeMay who really came to that conclusion, and led the Chiefs to move the whole thing to the Marianas, which devastated Japan.

— McNamara, Robert, “The Fog of War.” 2003.

The modern US Navy sidesteps this problem with a little help from a magical substance called Uranium. Modern US aircraft carriers operate for 25 years between refueling. 6 As of writing, there are multiple aircraft carriers in operation somewhere in the world that were refueled before some of this blog’s readers were born. Uranium and Plutonium are magic under other names.

But what if you don’t have a stockpile of Highly-Enriched Uranium and a fleet of multi-billion dollar aircraft carriers? Then your only recourse is a Forward Arming and Refueling Point (FARP). Or some logistical combination of Forward Operating Bases, Forward Operating Sites, and Main Operating Bases. 7 Each base acts as a resource buffer in your supply chain – allowing your troops to go from link-to-link-to-link while re-supplying at each point. It’s the same strategy the US Army Air Forces used for operations in WW2, what the Pony Express used to send mail, what Edward I used for his conquest of Wales, and what Edward I’s distant descendants used to manage the British navy’s ever-increasing need for coal. 8

Castles were the FARPs of the past, amongst other things,

The castle was also a storehouse for munitions, an advanced headquarters, an observation post in troubled areas, home of a lord, and a place where he could be secure from attacks by his enemies . Royal castles could in times of emergency act as havens for the king’s field army, or supply the men to raise a new army if the field army was defeated… [They were] not a place of refuge, but a centre of military power.

— Wise, Terence. “Medieval Warfare.” New York: Hastings House, 1976.

Castle garrisons would steal forage 9 from the land around them and create stockpiles of resources so that armies could use them as Forward Operating Bases/Sites when necessary. Each castle would be spaced one day’s march apart from the others, allowing armies to hop across the countryside. This strategy was central to medieval warfare. It’s why wars were fought castle-to-castle, siege-to-siege — disrupting this network was crucial for the enemy to gain territory.

When William I, or William the Conqueror, invaded England in the 11th Century, he used castles to secure territory. Lots and lots of castles. When his troops would first arrive at a new site, the first thing they would do would be to find a spot for a motte-and-bailey. Once a spot was identified, they would descend and build the castle as quickly as possible.

Orderic Vitalis gave the lack of castles in England as one reason for its defeat: ‘the fortifications [munitiones] called castles [castella] by the Normans were scarcely known in the English provinces , and so the English — could put up only a weak resistance to their enemies’.

— Turner, Ralph V. “Castles, Conquest and Charters: Collected Papers.” (1992).

As castles became more expensive and elaborate, they would require more than just troops in the field. They required masons, engineers, and castle specialists — expensive specialists. Regents would build castles until bankruptcy, take on debt, and then continue building more anyway. And it was a completely rational course of action at the time — castles were simply that important.

But what if the cornerstone of power in your society becomes obsolete? What if the strategy that has held true for over 500 years 10 becomes irrelevant in an afternoon?

For nearly 500 years, castles were a source of power for medieval societies. So what happens when that stops being true? Well… that’s for Part 2.

This series is the result of something I’ve been thinking about since the pandemic and is the synthesis of parts from several books + papers and discussions with friends. Sources include,

  • Bur, Michel. “The Motte and Bailey Castle: Instrument of Revolution.” Engineering and Science 45.3 (1982): 11-14. Accessible here.

  • Slavin, Philip. “Chicken husbandry in late-medieval eastern England: c. 1250–1400.” Anthropozoologica 44.2 (2009): 35-56. Accessible here.

  • Kelly, Jack. “Gunpowder: alchemy, bombards, and pyrotechnics: the history of the explosive that changed the world.” Basic Books (AZ). (2004).

  • Brauer, Jurgen, and Hubert van Tuyll. “Castles, Battles, & Bombs: How Economics Explains Military History.” (2008).

  • Granberry, G., et al. “The Age Of Gunpowder: An Era of Technological, Tactical, Strategic, and Leadership Innovations.” Emory Endeavors in History Series. (2014).

  • US Department of the Army. “The law of land warfare.” Field Manual 27-10 (1956).

LLMs lack a sense of taste so I don’t use them for writing. My writing style and approach is just too out there for most LLMs. I am (alas) an entirely carbon-based user of the em dash.

What I do use them for is augmenting my executive function and extracting data from digital copies of the books and papers I’ve read (doing it manually takes a lot of time).

Substack keeps bugging me if I don’t put a subscribe button, so here it is. Feel free to subscribe. Or not. I’m grateful that there’s anyone who reads my drivel at all. Thanks!

Show HN: Autofix Bot – Hybrid static analysis and AI code review agent

Hacker News
news.ycombinator.com
2025-12-11 21:24:34
Comments...
Original Article

Hi there, HN! We’re Jai and Sanket from DeepSource (YC W20), and today we’re launching Autofix Bot, a hybrid static analysis + AI agent purpose-built for in-the-loop use with AI coding agents.

AI coding agents have made code generation nearly free, and they’ve shifted the bottleneck to code review. Static-only analysis with a fixed set of checkers isn’t enough. LLM-only review has several limitations: non-deterministic across runs, low recall on security issues, expensive at scale, and a tendency to get ‘distracted’.

We spent the last 6 years building a deterministic, static-analysis-only code review product. Earlier this year, we started thinking about this problem from the ground up and realized that static analysis solves key blind spots of LLM-only reviews. Over the past six months, we built a new ‘hybrid’ agent loop that uses static analysis and frontier AI agents together to outperform both static-only and LLM-only tools in finding and fixing code quality and security issues. Today, we’re opening it up publicly.

Here’s how the hybrid architecture works:

- Static pass: 5,000+ deterministic checkers (code quality, security, performance) establish a high-precision baseline. A sub-agent suppresses context-specific false positives.

- AI review: The agent reviews code with static findings as anchors. Has access to AST, data-flow graphs, control-flow, import graphs as tools, not just grep and usual shell commands.

- Remediation: Sub-agents generate fixes. Static harness validates all edits before emitting a clean git patch.

Static solves key LLM problems: non-determinism across runs, low recall on security issues (LLMs get distracted by style), and cost (static narrowing reduces prompt size and tool calls).

On the OpenSSF CVE Benchmark [1] (200+ real JS/TS vulnerabilities), we hit 81.2% accuracy and 80.0% F1; vs Cursor Bugbot (74.5% accuracy, 77.42% F1), Claude Code (71.5% accuracy, 62.99% F1), CodeRabbit (59.4% accuracy, 36.19% F1), and Semgrep CE (56.9% accuracy, 38.26% F1). On secrets detection, 92.8% F1; vs Gitleaks (75.6%), detect-secrets (64.1%), and TruffleHog (41.2%). We use our open-source classification model for this. [2]

Full methodology and how we evaluated each tool: https://autofix.bot/benchmarks

You can use Autofix Bot interactively on any repository using our TUI, as a plugin in Claude Code, or with our MCP on any compatible AI client (like OpenAI Codex).[3] We’re specifically building for AI coding agent-first workflows, so you can ask your agent to run Autofix Bot on every checkpoint autonomously.

Give us a shot today: https://autofix.bot . We’d love to hear any feedback!

---

[1] https://github.com/ossf-cve-benchmark/ossf-cve-benchmark

[2] https://huggingface.co/deepsource/Narada-3.2-3B-v1

[3] https://autofix.bot/manual/#terminal-ui

RFC 7766 DNS Transport over TCP – Implementation Requirements

Hacker News
www.ietf.org
2025-12-11 21:18:49
Comments...
Original Article
https://www.ietf.org/rfc/rfc7766.txt

Thousands Tell the Patent Office: Don’t Hide Bad Patents From Review

Electronic Frontier Foundation
www.eff.org
2025-12-11 21:17:47
A massive wave of public comments just told the U.S. Patent and Trademark Office (USPTO): don’t shut the public out of patent review. EFF submitted its own formal comment opposing the USPTO’s proposed rules, and more than 4,000 supporters added their voices—an extraordinary response for a technical,...
Original Article

A massive wave of public comments just told the U.S. Patent and Trademark Office (USPTO): don’t shut the public out of patent review.

EFF submitted its own formal comment opposing the USPTO’s proposed rules, and more than 4,000 supporters added their voices—an extraordinary response for a technical, fast-moving rulemaking. We comprised more than one-third of the 11,442 comments submitted . The message is unmistakable: the public wants a meaningful way to challenge bad patents, and the USPTO should not take that away.

The Public Doesn’t Want To Bury Patent Challenges

These thousands of submissions do more than express frustration. They demonstrate overwhelming public interest in preserving inter partes review (IPR), and undermine any broad claim that the USPTO’s proposal reflects public sentiment.

Comments opposing the rulemaking include many small business owners who have been wrongly accused of patent infringement, by both patent trolls and patent-abusing competitors. They also include computer science experts, law professors, and everyday technology users who are simply tired of patent extortion—abusive assertions of low-quality patents—and the harm it inflicts on their work, their lives, and the broader U.S. economy.

The USPTO exists to serve the public. The volume and clarity of this response make that expectation impossible to ignore.

EFF’s Comment To USPTO

In our filing , we explained that the proposed rules would make it significantly harder for the public to challenge weak patents. That undercuts the very purpose of IPR. The proposed rules would pressure defendants to give up core legal defenses, allow early or incomplete decisions to block all future challenges, and create new opportunities for patent owners to game timing and shut down PTAB review entirely.

Congress created IPR to allow the Patent Office to correct its own mistakes in a fair, fast, expert forum. These changes would take the system backward.

A Broad Coalition Supports IPR

A wide range of groups told the USPTO the same thing: don’t cut off access to IPR.

Open Source and Developer Communities

The Linux Foundation submitted comments and warned that the proposed rules “would effectively remove IPRs as a viable mechanism for challenges to patent validity,” harming open-source developers and the users that rely on them. GitHub wrote that the USPTO proposal would increase “litigation risk and costs for developers, startups, and open source projects.” And dozens of individual software developers described how bad patents have burdened their work.

Patent Law Scholars

A group of 22 patent law professors from universities across the country said the proposed rule changes “would violate the law, increase the cost of innovation, and harm the quality of patents.”

Patient Advocates

Patients for Affordable Drugs warned in their filing that IPR is critical for invalidating wrongly granted pharmaceutical patents. When such patents are invalidated, studies have shown “cardiovascular medications have fallen 97% in price, cancer drugs dropping 80-98%, and treatments for opioid addiction becom[e] 50% more affordable.” In addition, “these cases involved patents that had evaded meaningful scrutiny in district court.”

Small Businesses

Hundreds of small businesses weighed in with a consistent message: these proposed rules would hit them hardest. Owners and engineers described being targeted with vague or overbroad patents they cannot afford to litigate in court, explaining that IPR is often the only realistic way for a small firm to defend itself. The proposed rules would leave them with an impossible choice—pay a patent troll, or spend money they don’t have fighting in federal court.

What Happens Next

The USPTO now has thousands of comments to review. It should listen. Public participation must be more than a box-checking exercise. It is central to how administrative rulemaking is supposed to work.

Congress created IPR so the public could help correct bad patents without spending millions of dollars in federal court. People across technical, academic, and patient-advocacy communities just reminded the agency why that matters.

We hope the USPTO reconsiders these proposed rules. Whatever happens, EFF will remain engaged and continue fighting to preserve  the public’s ability to challenge bad patents.

Show HN: Gotui – a modern Go terminal dashboard library

Hacker News
github.com
2025-12-11 21:05:27
Comments...
Original Article

gotui

Go Report Card GoDoc License

gotui is a cross-platform and fully-customizable terminal dashboard and widget library built on top of tcell . It is a modern fork of termui , inspired by ratatui and written purely in Go by Carsen Klock.

Logo

Note

This is a modern fork of termui for 2025, heavily upgraded to support TrueColor, modern terminal events, better performance, and new layouts.

Versions

gotui is compatible with Go 1.24+.

Features

  • Backend : Native tcell support for TrueColor (24-bit RGB), mouse events, and resize handling.
  • Gauges : Progress bars and gauges.
  • Charts :
    • BarChart : Stacked and standard bar charts.
    • PieChart : Pie and Donut charts.
    • RadarChart : Spider/Radar charts.
    • TreeMap : Hierarchical data visualization.
    • FunnelChart : Process flow/conversion charts.
    • Sparkline : Mini sparklines.
    • Plot : Line, Scatter, and Braille-mode charts.
  • Maps :
    • World Map : High-resolution world map example using the generic Canvas widget (see _examples/canvas.go ).
  • New Widgets :
    • LineGauge : Thin, character-based progress bar with alignment options (Block, Dots, custom runic styles).
    • Scrollbar : Ratatui-compatible scrollbars (Vertical/Horizontal) with mouse and keyboard support.
    • Logo : Pixel-perfect block-style logo renderer.
  • Performance :
    • Optimized Rendering : Buffer uses flat slices for O(1) access, providing 2-3x speedup.
    • Zero Allocations : Drawing loops minimized for high-fps scenes (~3000 FPS potential).
  • Layout :
    • Grid : Responsive grid layout.
    • Tabs : Tabbed navigation.
    • Interactive : Calendar, Tables, Input, TextArea.
  • Styling :
    • Rounded Borders : Optional rounded corners for blocks.
    • Full RGB Color support.
    • Border titles (Top and Bottom) with alignment (Left, Center, Right).
    • Rich styling parser for text.
    • Collapsed Borders : Support for merging adjacent block borders using BorderCollapse .
  • Compatibility : Works with modern terminals (iTerm2, Kitty, Alacritty, Ghostty).

Installation

Go modules

It is not necessary to go get gotui, since Go will automatically manage any imported dependencies for you.

go get github.com/metaspartan/gotui/v4

Hello World

package main

import (
	"log"

	ui "github.com/metaspartan/gotui/v4"
	"github.com/metaspartan/gotui/v4/widgets"
)

func main() {
	if err := ui.Init(); err != nil {
		log.Fatalf("failed to initialize gotui: %v", err)
	}
	defer ui.Close()

	p := widgets.NewParagraph()
	p.Text = "Hello World!"
	p.SetRect(0, 0, 25, 5)

	ui.Render(p)

	for e := range ui.PollEvents() {
		if e.Type == ui.KeyboardEvent {
			break
		}
	}
}

Widgets

Run an example with go run _examples/{example}.go or run each example consecutively with make run-examples .

Uses

(Submit your projects via a PR)

Acknowledgments

Author(s)

gotui Author: Carsen Klock - X

termui Author: Zack Guo - Github

Related Works

License

MIT

Notepad++ fixes flaw that let attackers push malicious update files

Bleeping Computer
www.bleepingcomputer.com
2025-12-11 21:04:15
Notepad++ version 8.8.9 was released to fix a security weakness in its WinGUp update tool after researchers and users reported incidents in which the updater retrieved malicious executables instead of legitimate update packages. [...]...
Original Article

Notepad++

Notepad++ version 8.8.9 was released to fix a security weakness in its WinGUp update tool after researchers and users reported incidents in which the updater retrieved malicious executables instead of legitimate update packages.

The first signs of this issue appeared in a Notepad++ community forum topic , where a user reported that Notepad++'s update tool, GUP.exe (WinGUp), spawned an unknown "%Temp%\AutoUpdater.exe" executable that executed commands to collect device information.

According to the reporter, this malicious executable ran various reconnaissance commands and stored the output into a file called 'a.txt.'

cmd /c netstat -ano >> a.txt
cmd /c systeminfo >> a.txt
cmd /c tasklist >> a.txt
cmd /c whoami >> a.txt

The autoupdater.exe malware then used the curl.exe command to exfiltrate the a.txt file to a remote site at temp[.]sh.

As GUP uses the libcurl library rather than the actual 'curl.exe' command and does not collect this type of information, other Notepad++ users speculated that the user had installed an unofficial, malicious version of Notepad++ or that the autoupdate network traffic was hijacked.

To help mitigate potential network hijacks, Notepad++ developer Don Ho released version 8.8.8 on November 18th, so that updates can be downloaded only from GitHub.

As a stronger fix, Notepad++ 8.8.9 was released on December 9th, preventing the installation of updates that are not signed with the developer's code-signing certificate.

"Starting with this release, Notepad++ & WinGUp have been hardened to verify the signature & certificate of downloaded installers during the update process. If verification fails, the update will be aborted," reads the Notepad++ 8.8.9 security notice .

Hijacked update URLs

Earlier this month, security expert Kevin Beaumont warned that he heard from three orgs that were impacted by security incidents linked to Notepad++.

"I've heard from 3 orgs now who've had security incidents on boxes with Notepad++ installed, where it appears Notepad++ processes have spawned the initial access." explained Beaumont .

"These have resulted in hands on keyboard threat actors."

The researcher says that all of the organizations he spoke to have interests in East Asia and that the activity appeared very targeted, with victims reporting hands-on reconnaissance activity after the incidents.

When Notepad++ checks for updates, it connects to https://notepad-plus-plus.org/update/getDownloadUrl.php?version=<versionnumber> . If there is a newer version, the endpoint will return XML data that provides the download path to the latest version:

<GUP>
<script/>
<NeedToBeUpdated>yes</NeedToBeUpdated>
<Version>8.8.8</Version>
<Location>https://github.com/notepad-plus-plus/notepad-plus-plus/releases/download/v8.8.8/npp.8.8.8.Installer.exe</Location>
</GUP>

Beaumont speculated that Notepad++'s autoupdate mechanism might have been hijacked in these incidents to push malicious updates that grant threat actors remote access.

"If you can intercept and change this traffic, you can redirect the download to any location it appears by changing the URL in the <Location> property," explained Beaumont.

"Because traffic to notepad-plus-plus.org is fairly rare, it may be possible to sit inside the ISP chain and redirect to a different download. To do this at any kind of scale requires a lot of resources," continued the researcher.

However, Beaumont noted that it is not uncommon for threat actors to use malvertising to distribute malicious versions of Notepad++ that install malware.

Notepad++'s security notice shares the same uncertainty, stating that they are still investigating how the traffic is being hijacked.

"The investigation is ongoing to determine the exact method of traffic hijacking. Users will be informed once tangible evidence regarding the cause is established," reads the security notice .

The developer states that all Notepad++ users should upgrade to the latest version, 8.8.9. They also noted that since v8.8.7, all official binaries and installers are signed with a valid certificate, and users who previously installed an older custom root certificate should remove it.

BleepingComputer contacted Notepad++'s developer on December 3rd with questions about the incidents but did not receive a reply.


Almond (YC X25) Is Hiring SWEs and MechEs

Hacker News
www.ycombinator.com
2025-12-11 21:00:10
Comments...
Original Article

Robots designed for the era of AI

Jobs at Almond

San Francisco, CA, US

$110K - $200K

0.50% - 2.00%

1+ years

San Francisco, CA, US

$130K - $200K

0.50% - 2.00%

Any (new grads ok)

Why you should join Almond

Our mission is to free humans from physical labor with robotics.

We imagine a future where robots handle the essential, repetitive work and humans are free to create, connect, and pursue what truly matters to them.

To build that future we’re starting from the ground up with hardware. Our first product is a California-designed and assembled humanoid arm. Surrounding it, we’re developing advanced controls, intuitive data collection, and a full AI stack that makes deployment effortless in real industrial environments. We’re proving it on our own assembly line first.

Almond

Founded: 2025

Batch: X25

Team Size: 2

Status: Active

Founders

Yearn Finance hacked for the third time

Web3 Is Going Great
web3isgoinggreat.com
2025-12-11 20:55:34
Yearn Finance, a defi yield protocol, has suffered another hack. The exploiter took advantage of bugs in the project's smart contract to drain assets from several of its pools by minting a huge number of yETH tokens and then withdrawing the corresponding asset in the pools.$2.4 million of t...
Original Article

Yearn Finance, a defi yield protocol, has suffered another hack. The exploiter took advantage of bugs in the project's smart contract to drain assets from several of its pools by minting a huge number of yETH tokens and then withdrawing the corresponding asset in the pools.

$2.4 million of the stolen assets, which were denominated in pxETH, a liquid staking token issued by Redacted Cartel, were recovered after the issuer burned the stolen tokens and reissued them to the team's wallet — essentially, removing the tokens from the hacker's wallet. However, the hacker routed the remaining funds through the Tornado Cash cryptocurrency mixer , which makes recovery substantially more challenging.

This is the third time Yearn Finance has been hacked, following an $11 million exploit in 2023 and another $11 million exploit in 2021 . Yearn also suffered around $1.4 million in losses in 2023 in connection to the Euler Finance attack .

Malicious VSCode Marketplace extensions hid trojan in fake PNG file

Bleeping Computer
www.bleepingcomputer.com
2025-12-11 20:54:21
A stealthy campaign with 19 extensions on the VSCode Marketplace has been active since February, targeting developers with malware hidden inside dependency folders. [...]...
Original Article


A stealthy campaign with 19 extensions on the VSCode Marketplace has been active since February, targeting developers with malware hidden inside dependency folders.

The malicious activity was uncovered recently, and security researchers found that the operator used a malicious file posing as a .PNG image.

The VSCode Marketplace is Microsoft’s official extensions portal for the widely used VSCode integrated development environment (IDE), allowing developers to extend its functionality or add visual customizations.

Due to its popularity and potential for high-impact supply-chain attacks, the platform is constantly targeted by threat actors with evolving campaigns.

ReversingLabs, a company specializing in file and software supply-chain security, found that the malicious extensions come pre-packaged with a ‘ node_modules ’ folder to prevent VSCode from fetching dependencies from the npm registry when installing them.

Inside the bundled folder, the attacker added a modified dependency, ‘ path-is-absolute ’ or ‘ @actions/io ,’ with an additional class in the ‘ index.js ’ file that executes automatically when starting the VSCode IDE.

Malicious code added to the index.js file
Source: ReversingLabs

It should be noted that ‘ path-is-absolute ’ is a massively popular npm package with 9 billion downloads since 2021, and the weaponized version existed only in the 19 extensions used in the campaign.

The code introduced by the new class in the ‘index.js’ file decodes an obfuscated JavaScript dropper inside a file named ' lock '. Another file present in the dependencies folder is an archive posing as a .PNG ( banner.png ) file that hosts two malicious binaries: a living-off-the-land binary (LoLBin) called ' cmstp.exe ' and a Rust-based trojan.

ReversingLabs is still analyzing the trojan to determine its full capabilities.

According to the researchers, the 19 VSCode extensions in the campaign use variations of the following names, all published with the version number 1.0.0:

  • Malkolm Theme
  • PandaExpress Theme
  • Prada 555 Theme
  • Priskinski Theme

ReversingLabs reported them to Microsoft, and BleepingComputer confirmed that all of them have been removed. However, users who installed the extensions should scan their system for signs of compromise.

Because threat actors find new ways to evade detection on public repositories used for software development, it is recommended that users inspect packages before installation, especially when the source is not a reputable publisher.

They should carefully comb through dependencies, especially when they are bundled in the package (as is the case with VS Code extensions) rather than pulled from a trusted source, as happens with npm.
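As a purely defensive illustration (this is my own sketch, not something from ReversingLabs; the extensions path and the package heuristic are assumptions, and a match only means "review by hand"), a short Node/TypeScript script can list installed extensions that bundle their own copy of the packages abused in this campaign:

// Hypothetical helper: flag VS Code extensions that ship a bundled node_modules
// copy of 'path-is-absolute' or '@actions/io'. A hit is a prompt for manual
// inspection, not proof of compromise.
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

const SUSPECT_PACKAGES = ['path-is-absolute', path.join('@actions', 'io')];

// Default VS Code extensions directory (assumption; Insiders/portable installs differ).
const extensionsDir = path.join(os.homedir(), '.vscode', 'extensions');

function main(): void {
  if (!fs.existsSync(extensionsDir)) {
    console.log(`No extensions directory found at ${extensionsDir}`);
    return;
  }
  for (const entry of fs.readdirSync(extensionsDir)) {
    const extPath = path.join(extensionsDir, entry);
    if (!fs.statSync(extPath).isDirectory()) continue;
    for (const pkg of SUSPECT_PACKAGES) {
      const bundled = path.join(extPath, 'node_modules', pkg);
      if (fs.existsSync(bundled)) {
        console.log(`Review by hand: ${entry} bundles ${pkg} at ${bundled}`);
      }
    }
  }
}

main();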


bidicalc: a bidirectional calculator

Lobsters
victorpoughon.github.io
2025-12-11 20:48:49
Comments...
Original Article

A spreadsheet where formulas also update backwards



In any normal spreadsheet, when you change values that are the input to some formulas, the outputs are automatically updated:

bidicalc-figure0

Could it also work the other way? What if you could also change the output, and have the inputs be updated to match the formula?

bidicalc-figure1

For the past few months I've been obsessed with (okay, really curious about) this idea. But there were so many questions:

  • Would it even be possible at all?
  • Could it work with very complex formulas? With exponents? With advanced math functions like log() , abs() , etc?
  • How would the UX work? In a normal spreadsheet, when you click on a cell that has a formula, you get to change the formula's expression. I would need a way to let the user change either the formula's expression or the cell's numeric value.
  • What should happen if there are multiple possible solutions? Like in the example above, if you set A3 to 100, should the result be 50/50, 20/80, -10000/10100? When there is an infinite number of possible solutions, how to pick one?
  • Could it work with chained formulas? Could I build a long chain of formulas, update the final value and find the matching inputs all the way backwards?

Ok, now let's just skip to the good part! Today I'm happy to introduce:


bidicalc — a bidirectional calculator


User guide

Type of cells

Variables
A simple number entered in a cell is a variable: 1.0 . It may be changed by the solver.

Constant
A number prefixed by a hash # is a constant. It will not be changed by the solver.

Text
Cells can be in text mode. To input text, wrap in double quotes: "Distance (km)" .

Formula
Formulas can be entered in a cell (the traditional = prefix is optional), for example:

A1 + A2
A1 + A2*(A3 - A1)^2
exp(A1)
#A1 + H5 * #H6
cos(2*pi())

The result of formulas will be automatically updated when an input they depend on changes. This is the usual forward update .

The magic of bidicalc is that once a formula has been computed, you can change the result . Bidicalc will walk "upstream" to change variable cells so that the formula's result matches the change you made. This is the backward update .

TIP

To change a cell formula's expression instead of its result, click on the F icon.

Supported functions

  • Arithmetic operators: addition + , subtraction - , multiplication * , division / , exponentiation ^ .
  • sqrt(x) : square root of x
  • pow(a, b) : exponentiation, a raised to the power of b
  • pi() : the π constant
  • abs(x) : absolute value of x
  • log(x) / ln(x) : natural logarithm of x
  • exp(x) : exponential, the value of e^x
  • cos(x) / sin(x) / tan(x) : cosine / sine / tangent

Keyboard shortcuts

Ctrl + <arrows> - Navigate the grid
Enter - Move down
Shift-Enter - Move up
Tab - Move right
Shift-Tab - Move left

The backwards solver

The solver will try its best to find a solution. However, it can fail in different ways:

  • The solution is incorrect.
    This is a bug and should not happen: please report it on GitHub , thank you!

  • The solver reports "no solution", but there is one. This could be a bug in the solver, or you have found a particularly difficult root finding problem that has solutions that are very difficult to find using floating point arithmetic. Please report it on GitHub so I can use it to improve the solver 😃

  • The solution is technically correct but unexpected.
    This can happen for a large class of problems, typically when there are a lot of free variables (the problem is heavily underdetermined) and the solution manifold is weird. For example, try to solve a*b*c = 1 to see this in action. To combat this, you can:

    • Set some variables to constants using the hash syntax, e.g. #50.
    • Reformulate the problem with fewer free variables.
    • Wait for me to implement more features like domain restrictions of variables.
    • Suggest improvements to the open-source solver on GitHub .

DANGER

Keep in mind this is an experiment I made for fun because I like math and spreadsheets. If you need to do root finding to compute the load tolerance of a one-million-ton suspension bridge, please don't use bidicalc 😄

How does it work?

Even a normal spreadsheet is a fairly complex beast. But the novel thing about bidicalc is the backwards solver. Mathematically, updating a spreadsheet "backward" is a (potentially underdetermined) root finding problem, because we are trying to find a vector of unknowns x such that F(x) − G = 0, where F is the function computed by the cells' formulas, and G is the objective value entered in the cell. Note that F is not necessarily a single formula, but the result of composing an upstream graph of cells into a single function.

The actual root-finding solver is a custom algorithm that I made. It is a general-purpose algorithm that will find one root of any continuous-almost-everywhere function for which a complete syntactic expression is known. It uses a mix of continuous constraint propagation on interval union arithmetic , directional Newton's method and dichotomic search. It is of course limited by floating point precision and available computation time.
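As a rough illustration of that root-finding idea (this is not bidicalc's actual solver, which handles many free variables, interval propagation, and fallback searches), a minimal one-variable backward update can be sketched in TypeScript with Newton's method and a numerically estimated derivative:

// Find x such that f(x) = target, starting from the cell's current value x0.
function solveBackward(
  f: (x: number) => number, // the "forward" formula of the cell
  target: number,           // the new value typed into the result cell
  x0: number,               // current input value, used as the initial guess
  tol = 1e-10,
  maxIter = 100,
): number | null {
  let x = x0;
  for (let i = 0; i < maxIter; i++) {
    const residual = f(x) - target;
    if (Math.abs(residual) < tol) return x;
    const h = 1e-6 * (Math.abs(x) + 1);
    const slope = (f(x + h) - f(x - h)) / (2 * h); // central difference
    if (slope === 0) return null;                  // flat spot: Newton cannot progress
    x -= residual / slope;
  }
  return null; // did not converge
}

// Example: a cell computes 3 + A2^2 and the user sets its result to 100.
console.log(solveBackward((a2) => 3 + a2 * a2, 100, 1)); // ≈ 9.8489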

Bidicalc is written in TypeScript and entirely open-source under the AGPL licence. This means that you can freely reuse, modify, and share bidicalc as long as you make the complete source code of your modified version available under the same licence. If you are interested in buying bidicalc under a different licence please get in touch.

I haven't taken the time to write a full deep-dive mathematical explanation of how it works, but if you are interested in that please let me know . I might find some time to do it if there is interest from fellow math nerds.

Future improvements

If I kept improving bidicalc until it was perfect I would have never released it. So currently it is imperfect and could be improved in a number of ways.

  • Domain restriction for variables. Currently the solver may assign any value in the interval [-10^10, 10^10]. I'd like to add special syntax so that variable cells can be restricted by the user to a specific interval. This would allow guiding the solver and saying that you only want this cell to be positive, or to be between 1 and 100, for example.
  • Solver improvements. The algorithm works well enough for simple problems so I'm happy to publish it in its current state, but it could always be improved. There are a million ways to improve it in the future so that it finds better solutions, particularly for highly underdetermined cases.
  • float64 gradients support. Due to a pretty obscure technical limitation of tensorflowjs (that I use to compute gradients), the backward solver is partially limited to single precision, even though the forward solver uses double precision via native JS numbers.
  • UX improvements. I am not very good at front-end dev 😄. I have learned vuejs to be able to make the UX for bidicalc but I'm not great at it. A spreadsheet interface is actually a massive state machine of complex and subtle behavior; it's a very interesting project and tricky to get right. As you can see, I've decided to skip the usual spreadsheet design principle that cells have two selection states: soft selected, which enables dragging, selection, etc., and hard selected, which enables changing the content of the cell. bidicalc is simply a CSS grid of <input> elements.
  • Move cell computation off the main thread. The solver is single threaded and happens in the UI thread. It should be moved to a web worker to avoid locking the UI.

About me

My name is Victor Poughon , I enjoy math and open-source software. If you want to see me do more stuff like this consider sponsoring me on GitHub or Buying me a coffee .

Thank you ❤️

Denial of service and source code exposure in React Server Components

Hacker News
react.dev
2025-12-11 20:46:46
Comments...
Original Article

December 11, 2025 by The React Team


Security researchers have found and disclosed two additional vulnerabilities in React Server Components while attempting to exploit the patches in last week’s critical vulnerability.

These new vulnerabilities do not allow for Remote Code Execution. The patch for React2Shell remains effective at mitigating the Remote Code Execution exploit.


The new vulnerabilities are disclosed as CVE-2025-55184 (Denial of Service) and CVE-2025-55183 (Source Code Exposure).

These issues are present in the patches published last week.

We recommend upgrading immediately due to the severity of the newly disclosed vulnerabilities.

Note

It’s common for critical CVEs to uncover follow‑up vulnerabilities.

When a critical vulnerability is disclosed, researchers scrutinize adjacent code paths looking for variant exploit techniques to test whether the initial mitigation can be bypassed.

This pattern shows up across the industry, not just in JavaScript. For example, after Log4Shell , additional CVEs ( 1 , 2 ) were reported as the community probed the original fix.

Additional disclosures can be frustrating, but they are generally a sign of a healthy response cycle.

Further details of these vulnerabilities will be provided after the rollout of the fixes is complete.

Immediate Action Required

These vulnerabilities are present in the same packages and versions as CVE-2025-55182 .

This includes versions 19.0.0, 19.0.1, 19.1.0, 19.1.1, 19.1.2, 19.2.0 and 19.2.1 of:

  • react-server-dom-webpack
  • react-server-dom-parcel
  • react-server-dom-turbopack

Fixes were backported to versions 19.0.2, 19.1.3, and 19.2.2. If you are using any of the above packages please upgrade to any of the fixed versions immediately.

As before, if your app’s React code does not use a server, your app is not affected by these vulnerabilities. If your app does not use a framework, bundler, or bundler plugin that supports React Server Components, your app is not affected by these vulnerabilities.

Note

The patches published last week are vulnerable.

If you already updated for the Critical Security Vulnerability, you will need to update again.

Affected frameworks and bundlers

Some React frameworks and bundlers depended on, had peer dependencies for, or included the vulnerable React packages. The following React frameworks & bundlers are affected: next , react-router , waku , @parcel/rsc , @vite/rsc-plugin , and rwsdk .

Please see the instructions in the previous post for upgrade steps.

Hosting Provider Mitigations

As before, we have worked with a number of hosting providers to apply temporary mitigations.

You should not depend on these to secure your app, and still update immediately.

React Native

For React Native users not using a monorepo or react-dom , your react version should be pinned in your package.json , and there are no additional steps needed.

If you are using React Native in a monorepo, you should update only the impacted packages if they are installed:

  • react-server-dom-webpack
  • react-server-dom-parcel
  • react-server-dom-turbopack

This is required to mitigate the security advisories, but you do not need to update react and react-dom so this will not cause the version mismatch error in React Native.

See this issue for more information.

High Severity: Denial of Service

CVE: CVE-2025-55184 Base Score: 7.5 (High)

Security researchers have discovered that a malicious HTTP request can be crafted and sent to any Server Functions endpoint that, when deserialized by React, can cause an infinite loop that hangs the server process and consumes CPU. Even if your app does not implement any React Server Function endpoints it may still be vulnerable if your app supports React Server Components.

This creates a vulnerability vector where an attacker may be able to deny users from accessing the product, and potentially have a performance impact on the server environment.

The patches published today mitigate this issue by preventing the infinite loop.

Medium Severity: Source Code Exposure

CVE: CVE-2025-55183 Base Score : 5.3 (Medium)

A security researcher has discovered that a malicious HTTP request sent to a vulnerable Server Function may unsafely return the source code of any Server Function. Exploitation requires the existence of a Server Function which explicitly or implicitly exposes a stringified argument:

'use server';

export async function serverFunction(name) {
  const conn = db.createConnection('SECRET KEY');
  const user = await conn.createUser(name); // implicitly stringified, leaked in db
  return {
    id: user.id,
    message: `Hello, ${name}!` // explicitly stringified, leaked in reply
  };
}

An attacker may be able to leak the following:

0:{"a":"$@1","f":"","b":"Wy43RxUKdxmr5iuBzJ1pN"}

1:{"id":"tva1sfodwq","message":"Hello, async function(a){console.log(\"serverFunction\");let b=i.createConnection(\"SECRET KEY\");return{id:(await b.createUser(a)).id,message:`Hello, ${a}!`}}!"}

The patches published today prevent stringifying the Server Function source code.

Note

Only secrets in source code may be exposed.

Secrets hardcoded in source code may be exposed, but runtime secrets such as process.env.SECRET are not affected.

The scope of the exposed code is limited to the code inside the Server Function, which may include other functions depending on the amount of inlining your bundler provides.

Always verify against production bundles.
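As a purely illustrative before/after (the db client, variable names, and environment variable are hypothetical and not part of the advisory), moving the secret into an environment variable keeps its value out of the function body that this vulnerability can leak; the leaked source would contain the text process.env.DB_SECRET but not the secret itself:

'use server';

export async function serverFunction(name) {
  // Hypothetical example: the secret is read at runtime, so it never appears
  // as a literal in the Server Function's source code.
  const conn = db.createConnection(process.env.DB_SECRET);
  const user = await conn.createUser(name);
  return { id: user.id, message: `Hello, ${name}!` };
}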


Timeline

  • December 3rd : Leak reported to Vercel and Meta Bug Bounty by Andrew MacPherson .
  • December 4th : Initial DoS reported to Meta Bug Bounty by RyotaK .
  • December 6th : Both issues confirmed by the React team, and the team began investigating.
  • December 7th : Initial fixes created and the React team began verifying and planning new patch.
  • December 8th : Affected hosting providers and open source projects notified.
  • December 10th : Hosting provider mitigations in place and patches verified.
  • December 11th : Additional DoS reported to Meta Bug Bounty and added to patch.
  • December 11th : Patches published and publicly disclosed as CVE-2025-55183 and CVE-2025-55184 .

Attribution

Thank you to Andrew MacPherson (AndrewMohawk) for reporting the Source Code Exposure, and to RyotaK from GMO Flatt Security Inc for reporting the initial Denial of Service vulnerability.

UK House of Lords attempting to ban use of VPNs by anyone under 16

Hacker News
alecmuffett.com
2025-12-11 20:32:22
Comments...
Original Article

This is deranged, each nation’s boomers and reactionaries attempting to outdo the others :

“Action to prohibit the provision of VPN services to children in the United Kingdom ” … the provider of any Relevant VPN Service which is, or is likely to be — (i) offered or marketed to persons in the United Kingdom; (ii) provided to a significant number of persons. (c) must make provision for the monitoring and effective enforcement of the child VPN prohibition.


VPNs are a technology which anyone can implement for themselves . “Regulatory compliance” of them is not feasible, it’d be like banning DIY.

Not to mention it would include The Tor Project.

sled: A command-line utility for Advent of Code written in Janet

Lobsters
github.com
2025-12-11 20:30:22
Comments...
Original Article

Sled

Latest Release Test Status

Sled is the Seasonal Linear Enigma Device , a command-line utility for Advent of Code .

Usage: sled [--session <file>] <subcommand> [<args>]

Seasonal Linear Enigma Device, a command-line utility for Advent of Code.

Options:

 -s, --session <file>    A file that contains the session ID for the user's
                         logged in session. (Default: session.txt)
 -h, --help              Show this help message.

Subcommands:

 a, answer       Submit an answer.
 c, calendar     Display the calendar.
 p, puzzle       Download a puzzle.

For more information on each subcommand, type 'sled help <subcommand>'.

Requirements

Sled uses the curl command-line utility to communicate with the Advent of Code servers. It must be on the PATH of the user that runs sled .

Installing

Homebrew

The latest release of sled is available via Homebrew for macOS (Apple Silicon) and Linux (x86-64 and aarch64):

$ brew tap pyrmont/sled https://github.com/pyrmont/sled
$ brew install sled

Jeep

If you use Janet, you can install sled using Jeep :

$ jeep install https://github.com/pyrmont/sled

Pre-Built

Pre-built binaries of sled are available as tarballs via the Releases section on GitHub for:

  • FreeBSD 14 (x86-64 and aarch64)
  • Linux (x86-64 and aarch64)
  • macOS (aarch64)
$ curl -LO https://github.com/pyrmont/sled/releases/latest/download/sled-v<version>-<platform>-<arch>.tar.gz
$ tar -xzf sled-v<version>-<platform>-<arch>.tar.gz
$ cd sled-v<version>
# use sudo or doas depending on the permissions of the target directories
$ sudo cp sled /usr/local/bin/ # or somewhere else on your PATH
$ sudo cp sled.1 /usr/local/share/man/man1/ # or somewhere else on your MANPATH

From Source

To build the sled binary from source, you need Janet installed on your system. Then run:

$ git clone https://github.com/pyrmont/sled
$ cd sled
$ git tag --sort=creatordate
$ git checkout <version> # check out the latest tagged version
$ janet --install .

Configuring

Sled requires your Advent of Code session cookie to authenticate with the Advent of Code servers. To get your session cookie:

  1. log in to Advent of Code
  2. open your browser's developer tools
  3. go to the Storage tab
  4. find the cookie named session
  5. copy its value to a file (such as session.txt )

Using

Run sled --help for usage information. The command-line arguments are explained in more detail in the man page .

Downloading Puzzles

Download a puzzle for a specific year and day:

$ sled puzzle --year 2025 --day 1

This downloads both the puzzle explanation and your puzzle input. The puzzle explanation is converted from HTML to a text-friendly format.

By default, Sled puts the files for each day into a subdirectory with a name that matches that day (e.g. ./day01/puzzle.txt and ./day01/input.txt ). To save files without creating any subdirectories, use the --no-subdirs option.

Submitting Answers

Submit an answer for a specific part:

$ sled answer --year 2025 --day 1 --part 1 <answer>

Viewing the Calendar

Display your Advent of Code calendar with ASCII art and completion status:

$ sled calendar --year 2025

The calendar displays:

  • the ASCII art calendar for that year
  • gold stars ( ** ) for puzzles you've completed
  • ANSI 256 colours

To disable colours:

$ sled calendar --year 2025 --no-color

Bugs

Found a bug? I'd love to know about it. The best way is to report your bug in the Issues section on GitHub.

Licence

Sled is licensed under the MIT Licence. See LICENSE for more details.

The HTML-First Approach: Why htmx and Lightweight Frameworks Are Revolutionizing Web Development

Lobsters
www.danieleteti.it
2025-12-11 20:20:24
Comments...
Original Article

For years, when it came to building something “modern” on the web, the almost automatic choice fell on React, Angular, Vue, and the entire Single Page Application (SPA) ecosystem. These frameworks became the safe choice, almost a de facto standard. But lately, a significant shift is happening in the front-end landscape. Many teams — including some large ones with enterprise projects — are moving toward HTML-first frameworks like htmx and other tools that take a more traditional, server-driven approach.

And honestly, it makes perfect sense. 🎯

Not every application needs a heavy client-side engine. In many cases, the SPA model adds more complexity than value. HTML-first frameworks bring back some of the simplicity and speed the web was originally designed for, without sacrificing the interactivity users expect.

In this article, we’ll explore in depth the reasons behind this trend, backed by concrete data and statistics.

The JavaScript Bloat Problem: The Numbers Speak Clearly 📊

Before analyzing the advantages of the HTML-first approach, it’s essential to understand the magnitude of the problem we’re facing.

The Exponential Growth of JavaScript

The HTTP Archive data is eloquent: the average amount of JavaScript transferred per page has grown from 90 KB in 2010 to 650 KB in 2024 . And this trend shows no signs of slowing down.

But these are just average values. A detailed 2024 analysis reveals far more extreme cases:

  • Slack , a chat application, loads 55 MB of JavaScript — practically the size of the original Quake 1 with all resources included
  • Jira , a task management software, weighs almost 50 MB
  • LinkedIn reaches 31 MB
  • Simple social network “Like” buttons typically require 12 MB of code
  • Even Google Maps , relatively modest by modern standards, weighs 4.5 MB

If we assume an average line of code is about 65 characters, we’re talking about shipping approximately 150,000 lines of code with every website, sometimes just to display static content!

💡 Ever thought about it? Slack, a messaging app, requires more space than an entire 3D video game from the 90s. To send text messages.

The Performance Impact

JavaScript is the most computationally expensive resource a browser has to handle. It’s often the bottleneck that determines whether a page appears fast or slow, as an oversized bundle can block rendering and degrade overall performance.

For those using a high-end laptop with fiber connection, this might be just a minor annoyance. But for those browsing with a low-end phone or unstable connection, it can make the difference between staying on the site or abandoning it completely.

Framework Sizes Compared

Here’s a comparison of the major JavaScript frameworks’ sizes (gzipped versions), according to data from LogRocket and Strapi :

Framework Gzipped Size
Angular ~62.3 KB
React + ReactDOM ~44.5 KB
Vue ~34.7 KB
htmx ~14 KB

htmx weighs about one-third of Vue and less than one-quarter of Angular . And these are just the core frameworks — real applications typically include libraries for routing, state management, form handling, HTTP client, and much more.

Building with HTML Instead of Fighting with JavaScript Layers 🛠️

HTML-first tools let you focus on the actual structure and behavior of your application, instead of juggling component trees, hydration, reducers, context providers, and all the overhead that comes with SPA frameworks.

With htmx, for example, you simply send HTML from the server, and the library swaps the right parts on the page. No 20-file folders for a single React component. No client-side state management libraries. No over-engineering.

It’s an approach that feels surprisingly straightforward and linear.

A Practical Example

Let’s consider a simple button that loads dynamic content.

Traditional SPA approach (React):

// useState, useEffect, fetch API, loading state management,
// error handling, conditional rendering...
function LoadDataButton() {
  const [data, setData] = useState(null);
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState(null);

  const handleClick = async () => {
    setLoading(true);
    try {
      const response = await fetch('/api/data');
      const json = await response.json();
      setData(json);
    } catch (e) {
      setError(e);
    } finally {
      setLoading(false);
    }
  };

  return (
    <div>
      <button onClick={handleClick} disabled={loading}>
        {loading ? 'Loading...' : 'Load Data'}
      </button>
      {error && <div className="error">{error.message}</div>}
      {data && <DataDisplay data={data} />}
    </div>
  );
}

HTML-first approach with htmx:

<button hx-get="/data"
        hx-target="#result"
        hx-indicator="#loading">
  Load Data
</button>
<span id="loading" class="htmx-indicator">Loading...</span>
<div id="result"></div>

The server returns the HTML ready to display directly. End of story.

The difference is clear : 30+ lines of React code vs 5 lines of HTML with htmx. Same result, radically different complexity.

Performance Improves Almost Automatically ⚡

SPAs ship tons of JavaScript to the browser — and the browser pays for it every time: parsing, executing, hydrating, diffing the virtual DOM, and so on.

HTML-first frameworks work the opposite way. They load fast because the browser handles what it’s best at: rendering HTML. Interactivity is added in small, targeted pieces instead of shipping an entire runtime.

The Data Confirms

According to a 2024 study :

  • The median Time to Interactive (TTI) for SPAs was 2.9 seconds , versus 1.8 seconds for SSR sites
  • The median Time to First Byte (TTFB) was 0.6 seconds for SPAs, versus 0.2 seconds for SSR

Users — especially mobile users — feel the difference immediately.

Why Server-Side Rendering is Faster for Content

SSR’s advantage lies primarily in the faster time to display content, which becomes more evident on slow Internet connections or underpowered devices. Server-rendered markup doesn’t need to wait for all JavaScript to be downloaded and executed to be displayed, so users see a fully rendered page sooner.

This generally translates to better Core Web Vitals metrics, superior user experience, and can be critical for applications where content display time is directly associated with conversion rate.

Server-Driven UI is Cleaner for Most Business Applications 🏢

Most real logic — validation, business rules, access control — lives on the server anyway. HTML-first frameworks don’t force you to duplicate it on both sides.

Instead of creating an API endpoint, transforming it, consuming it, and syncing everything in the client… you just render HTML with the updated state.

Simple. Predictable. Easy to understand and debug.

One Routing, One Source of Truth

One of the most underrated aspects of the HTML-first approach is the elimination of routing duplication. In traditional SPAs, you inevitably end up with two parallel routing systems: one on the server (for APIs and initial rendering) and one on the client (for internal navigation). This means double configuration, double maintenance, and often hard-to-debug inconsistencies.

With htmx and the server-driven approach, there’s only one routing : the server’s. URLs correspond directly to resources, the browser handles navigation natively, and the developer has a single source of truth to maintain.

Validation: Only Where It Really Matters

The same principle applies to formal checks and data validation. In SPA architectures, developers often find themselves implementing the same validation logic twice: on the client (to provide immediate feedback and improve UX) and on the server (where checks MUST always reside for security reasons).

With the HTML-first approach, validation stays where it belongs: on the server . And thanks to the speed of partial HTML responses, user feedback is still almost instant. The server validates the data and returns the HTML with any error messages already rendered. No logic duplication, no risk of inconsistencies between client and server rules, no possibility of a malicious user bypassing client-side checks.

What if you want to implement a client-side check to avoid too many server requests? You can always do it! The fundamental difference is that you’re not forced to implement validation twice — you can choose to do it only when it makes sense for UX, knowing that server-side validation is already guaranteed.

🔐 Security note : Client-side validation is always bypassable. A malicious user can simply disable JavaScript or modify HTTP requests. Server-side validation is not optional — it’s the only one that really matters for security.
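As a minimal sketch of this flow (using Node with Express purely for illustration; the route, field name, and markup are assumptions, and the article's own examples target a Delphi backend), the server validates the submitted form and answers with a ready-to-swap HTML fragment that htmx can place into the page:

import express from 'express';

const app = express();
app.use(express.urlencoded({ extended: false })); // parse form posts

// An htmx form would post here with hx-post="/subscribe" hx-target="#result".
app.post('/subscribe', (req, res) => {
  const email = String(req.body.email ?? '').trim();
  if (!email.includes('@')) {
    // Validation lives on the server; the error comes back as rendered HTML.
    res.status(422).send('<div class="error">Please enter a valid email address.</div>');
    return;
  }
  // ... persist the subscription here ...
  res.send('<div class="success">Thanks, you are subscribed!</div>');
});

app.listen(3000);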

The HATEOAS Pattern Revives

The HTML-first approach aligns with the HATEOAS architectural principle (Hypermedia as the Engine of Application State), one of the original REST constraints often ignored in modern JSON APIs.

But what exactly does this principle say? HATEOAS states that a client should be able to interact with an application entirely through the hypermedia responses dynamically provided by the server. In other words, the client should have no prior knowledge of how to interact with the server beyond an initial entry point — all possible actions should be discovered dynamically through the links and controls present in the response itself.

Think about it: when the server returns HTML, it automatically returns links to related pages, forms for available actions, buttons for permitted operations. The interface is the API. The browser already knows how to navigate links and submit forms — no client-side logic is needed to “interpret” the response and decide what to do next.

Can your JSON-APIs do this? In the vast majority of cases, no. JSON APIs return raw data that the client must interpret, and navigation and interaction logic must be implemented separately in the front-end code. The client must “know” beforehand which endpoints to call, how to construct requests, and how to interpret responses — effectively violating the HATEOAS principle.

Let’s make a concrete example: a customer list that shows the individual customer’s detail on click.

With a traditional JSON-API:

{
  "customers": [
    {"id": 1, "name": "Mario Rossi", "email": "mario@example.com"},
    {"id": 2, "name": "Luigi Verdi", "email": "luigi@example.com"}
  ]
}

The client receives this data and must know beforehand that to get the detail it needs to call /api/customers/{id} . This knowledge is hardcoded in the front-end JavaScript code. If the URL changes, the client breaks. If there are access rules that prevent viewing certain customers, the client doesn’t know until it tries to call the endpoint and receives an error.
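To make that hidden coupling concrete, the client-side counterpart of this JSON API might look like the following hypothetical TypeScript (the endpoint shape and element id are assumptions for illustration):

// The endpoint shape "/api/customers/{id}" and the rendering logic are both
// baked into the client; if either changes on the server, this code breaks.
async function showCustomerDetail(id: number): Promise<void> {
  const response = await fetch(`/api/customers/${id}`);
  const customer = await response.json();
  const detail = document.getElementById('detail');
  if (detail) {
    detail.textContent = `${customer.name} <${customer.email}>`;
  }
}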

With htmx and the HTML-first approach:

<ul>
  <li>
    <a hx-get="/customers/1" hx-target="#detail">Mario Rossi</a>
  </li>
  <li>
    <a hx-get="/customers/2" hx-target="#detail">Luigi Verdi</a>
  </li>
</ul>
<div id="detail"></div>

The server returns the HTML with links already embedded. The client doesn’t need to “know” anything — it simply follows the links present in the response. If a user doesn’t have access to a certain customer, the server simply doesn’t include that link in the list. If the URL changes, the server generates the new links and the client continues working without modifications.

Which of the two approaches respects HATEOAS? Only the second. HTML is hypermedia by definition — it was designed exactly for this purpose.

To be precise, a well-designed JSON-API should include a links property for related resource discovery:

{
  "customers": [
    {
      "id": 1,
      "name": "Mario Rossi",
      "email": "mario@example.com",
      "links": {
        "self": "/api/customers/1",
        "orders": "/api/customers/1/orders"
      }
    }
  ],
  "links": {
    "self": "/api/customers",
    "next": "/api/customers?page=2"
  }
}

But let’s be honest: the vast majority of JSON-APIs in production don’t implement these links. And even when they do, the problem isn’t solved: the client still needs to have a UI ready and capable of interpreting that data and displaying it appropriately. JSON links tell where to go, but not how to present what’s found. The client still needs to “know” that a customer has a name, an email, and that orders should be displayed in a table with certain columns.

With HTML, instead, the representation is already included in the response. No client-side rendering logic needed.

With htmx, the server doesn’t just return data, but returns complete user interface representations with links and available actions already embedded. This eliminates an entire category of problems related to state synchronization between client and server.

Less Code and Fewer Dependencies 📦

One of the biggest benefits is simply a smaller codebase:

  • No huge bundle to optimize and debug
  • No complicated build pipeline (webpack, babel, typescript config, etc.)
  • Fewer moving parts that can break
  • Easier onboarding for new team members
  • Fewer version upgrade headaches

This also translates to fewer bugs and faster long-term maintenance.

The Hidden Cost of Complexity

Every JavaScript library you add to the project brings a cost that goes well beyond the kilobytes of the initial download.

Take a typical example: you want to format a date. You install moment.js (or a modern alternative). But that library has its dependencies, which in turn have others. Suddenly, to format “2025-01-15” as “January 15, 2025”, you’ve added hundreds of kilobytes to your bundle.

And it doesn’t end there. Every library:

  • Has its own release cycle with breaking changes
  • Can have security vulnerabilities requiring urgent updates
  • Must be compatible with all other project libraries
  • Adds time to build and deploy

Many developers install libraries like lodash, axios, or moment just to use a single function. It’s like buying an entire toolbox just to use the screwdriver.

⚠️ Warning : Every dependency is a potential breaking point. JavaScript supply chain incidents continue to occur regularly — from compromised packages to maintainers abandoning critical projects. Fewer dependencies = fewer risks.

With the HTML-first approach, much of this complexity simply disappears. Date formatting happens on the server (where you have full control over the format), HTTP requests are handled by htmx with a few attribute lines, and the DOM is automatically updated with the received HTML. No need to reinvent the wheel with 200 KB of JavaScript.
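For instance, a Node backend can format that date with the standard Intl API before rendering the HTML (a minimal sketch, not tied to any particular server framework):

// Formats the ISO date "2025-01-15" as "January 15, 2025" using only the
// built-in Intl API; no formatting library is shipped to the browser.
const formatted = new Intl.DateTimeFormat('en-US', { dateStyle: 'long', timeZone: 'UTC' })
  .format(new Date('2025-01-15'));
console.log(formatted); // "January 15, 2025"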

Progressive Enhancement: Evolution, Not Rewrite 🔄

You can add htmx to almost any existing backend without overhauling the entire project structure.

Need a dynamic table? Enhance that section.

Need modal interactions without a full SPA? Swap in the HTML on the fly.

It’s not an “all or nothing” decision — you improve the interface step by step.

The Three Levels of Progressive Enhancement

Progressive enhancement is based on three fundamental principles:

  1. Content First : At the heart of every website is its content. Progressive enhancement ensures that content is accessible to all users, even those with very basic browsers or slow Internet connections. This means starting from a solid HTML foundation.

  2. Basic Functionality : Ensure that core features work without advanced JavaScript. With htmx this is possible if you design carefully: links can have a standard href as fallback, forms can work with a normal submit. htmx enhances HTML’s standard behavior. However, it’s important to note that features based on buttons with hx-get or hx-post (that aren’t form submits) require JavaScript — it’s up to the developer to decide where graceful degradation is important and design accordingly.

  3. Enhanced Experiences : Progressively add advanced features for browsers and devices that support them.

Wikipedia, powered by MediaWiki, is a perfect example : it’s readable, navigable, and even editable using the basic HTML interface without styling or scripts, but it’s enhanced when these are available.

SEO, Accessibility, and Browser Behavior Just Work 🌐

When your app uses real HTML instead of a Virtual DOM, you avoid a lot of accidental problems that SPAs introduce. The back button, deep linking, accessibility tools, and SEO all behave naturally.

No hacks required.

SEO Benefits

Since the base content is always accessible to search engine spiders, pages built with progressive enhancement avoid the problems that can hinder indexing; by contrast, having to render a page’s base content through JavaScript execution makes crawling slow and inefficient.

This strategy speeds up loading and makes crawling by search engines easier, as the text on the page loads immediately through the HTML source code rather than having to wait for JavaScript to initialize and load content later.

Search engines prioritize accessible and easy-to-read content. Starting from a clean HTML structure and ensuring content is available to all users improves the chances of ranking well in search results.

Ready for the AI Search Era

There’s an aspect that many developers aren’t yet considering: the rise of AI-based search engines. Tools like ChatGPT Search, Perplexity, Google AI Overviews, and others are radically changing how users find information online.

These AI systems need clear, structured, and immediately accessible content . While some modern crawlers can execute JavaScript, execution is often limited, delayed, or incomplete. A SPA that loads content dynamically via JavaScript risks being indexed partially, inaccurately, or with significant delays compared to sites with immediate HTML content.

With the HTML-first approach, content is already present in the HTML document served by the server. AI crawlers (as well as traditional search engine spiders) can immediately access, understand, and index all content. No waiting for JavaScript execution, no risk of parts of the page not being seen.

In a world where more and more traffic will come from AI-generated responses, having easily accessible and well-structured content is no longer just a best practice — it’s a competitive necessity.

🤖 Prepare for the future : AI crawlers are becoming increasingly important. A site invisible to AI is a site losing opportunities. HTML-first puts you in pole position.

Accessibility Benefits

One of the most significant benefits of progressive enhancement is improved accessibility. Starting from a solid HTML foundation ensures that content is accessible to all users, including those with disabilities.

Screen readers and other assistive technologies work natively with semantic HTML. SPAs, on the other hand, often require complex ARIA implementations and specialized testing to reach the same level of accessibility.

htmx: The Rising Star ⭐

htmx is experiencing impressive growth. According to JavaScript Rising Stars 2024 , htmx gained more annual GitHub stars than more established libraries like Vue and Angular (note: this refers to annual star growth, not total).

The Numbers of Success

  • In the Stack Overflow Developer Survey 2024 , htmx is the 22nd most used web framework with 3.3% of developers using it
  • In terms of satisfaction, htmx is the 2nd most “admired” web framework with 72.9% in the Stack Overflow 2024 survey, second only to Elixir’s Phoenix framework (83.7%)
  • In the Django Developers Survey , htmx usage went from 5% in 2021 to 16% in 2022 — 220% growth
  • htmx 2.0.0 was released on June 17, 2024, marking an important maturity milestone
  • htmx was admitted to the GitHub Accelerator in 2023, a program that selects the most promising open source projects

What Makes htmx Special

htmx is small (~14-16 KB min.gz), dependency-free, extensible, and according to reports , has reduced code size by 67% compared to React in comparable projects.

htmx doesn’t just reduce bundle size — it eliminates the need for Virtual DOM diffing, component lifecycles, and client-side state orchestration. The result is faster page loads, less code to debug, and a lighter mental model for building user interfaces.

Ideal for Enterprise Apps, Dashboards, and Portals 💼

Not every application is a highly interactive design tool. Many of the systems companies build — booking platforms, dashboards, admin tools, forms, internal portals — fit perfectly with this model.

They don’t need 500 KB of JavaScript to handle basic interactions.

HTML-first frameworks strike a good balance between interactivity and maintainability.

Perfect Use Cases for HTML-First

  1. Administrative dashboards : Tables with pagination, filters, CRUD actions
  2. Internal portals : Document management, workflow approval, reporting
  3. E-commerce backend : Order management, inventory, customers
  4. Complex multi-step forms : Registration wizards, product configurators
  5. Booking systems : Calendars, reservations, slot management
  6. CRM and ERP : Contact management, sales pipeline, invoicing
  7. Data-entry applications : Data import/export, bulk editing

htmx and Delphi: A Winning Combination 🚀

For Delphi developers, the HTML-first approach with htmx represents a particularly interesting opportunity. DelphiMVCFramework offers excellent support for this paradigm, allowing you to leverage the power and reliability of a Delphi backend with a modern, lightweight front-end.

Benefits for Delphi Developers

  1. Robust backend : Delphi’s solidity and performance on the server
  2. Powerful template engines : TemplatePro or WebStencils for dynamic HTML generation
  3. Simple deployment : A single executable without node.js dependencies
  4. No JavaScript build : Goodbye webpack, npm, and complex tool chains
  5. Reusable skills : Delphi developers can be productive immediately

The quickstart projects available on GitHub ( TemplatePro + htmx and WebStencils + htmx ) offer an ideal starting point.

💡 Tip : If you’re a Delphi developer and want to start with htmx, clone one of the quickstart projects and you’ll have a working application in minutes. No npm configuration, no webpack, no 500 MB node_modules.

When NOT to Use the HTML-First Approach ⚖️

It’s important to be honest: the HTML-first approach isn’t the solution for everything. Here’s when traditional SPAs remain the better choice:

  1. Highly interactive real-time applications : Graphic editors, collaboration tools like Google Docs, video editing applications
  2. Offline-first applications : When the application needs to work largely without a connection
  3. Games and 3D applications : Where intensive client-side rendering is necessary
  4. Applications with complex client state : Where the interface state is significantly different from server state

For these types of applications, SPAs offer concrete advantages: sophisticated local state management, fluid transitions between complex views, and the ability to work offline with service workers.

The Future: A Hybrid Approach 🔮

The trend we’re observing isn’t a return to the past, but an evolution toward a more pragmatic approach. The best developers are learning to choose the right tool for the specific problem.

The Islands Architecture Paradigm

An emerging trend is “Islands Architecture”, where most of the page is static or server-rendered HTML, with JavaScript interactivity “islands” only where needed. Frameworks like Astro.js (with an impressive 94% of users who would use it again according to State of JavaScript 2024 ) are exploring this territory.

htmx fits perfectly into this paradigm, allowing you to add interactivity where needed without requiring a completely client-side architecture.

Conclusions 🎯

React and Angular absolutely have their place, and they’re great when you really need a complex client-side application. But for many projects, the HTML-first approach is faster to build, easier to maintain, and lighter on the browser.

It’s not a step backward — it’s a reminder that the web already gives us everything we need to build powerful, responsive applications without the extra weight.

With htmx now having over 72% satisfaction among developers, explosive GitHub star growth, and adoption by increasingly larger teams, it’s clear this isn’t just a passing trend. It’s a rediscovery of fundamental web principles, empowered by modern tools that make the development experience smooth and enjoyable.

The next time you start a new web project, before automatically reaching for React or Vue, ask yourself: “Do I really need all this?” The answer might surprise you.

🚀 Ready to try? Start with a small project or improve a section of an existing application. htmx doesn’t require a complete rewrite — you can adopt it gradually and see the benefits immediately.

Resources to Learn More 📚

🎓 Want to dive deeper with a course? bit Time Professionals offers dedicated courses on htmx and on the DelphiMVCFramework + TemplatePro + htmx integration. Taking a structured course allows you to be productive right away, with practical applications, real examples, proven architectural patterns, and best practices learned from years of field experience. Courses are available in Italian, English, and Spanish.

What I Look For in AI-Assisted PRs

Lobsters
benjamincongdon.me
2025-12-11 20:19:51
Comments...
Original Article

I review a lot of PRs these days. As the job of a PR author becomes easier with AI, the job of a PR reviewer gets harder. 1

AI can “assist” with code review, but I’m less optimistic about AI code review than AI code generation. Sure, Claude/Codex can be quite helpful as a first pass, but code review still requires a large amount of human taste. 2

I care about the high level abstractions my team uses in our codebase, and about how the pieces fit together. I care that our codebase can be intuitively understood by new team members. I care that code is tamper-resistant – that we build things robustly such that imperfect execution in the future doesn’t cause something to blow up. Systems should be decomposable. You should be able to fit all the components of the system in your head in a reasonably faithful mental model, but you shouldn’t need to fit all the implementation details of each component in your head to not cause something to break.

Anyways.

I’ve been trying to speed up my review latency for PRs, and have given some thought to the heuristics I use to evaluate PRs. Heuristics are lossy, of course, but they’re necessary. If you haven’t given this much thought recently, it’s useful to consciously recalibrate the heuristics you use when reviewing code, now that so much code is generated by LLMs.

General Reviewability

  • Did the author provide a detailed & accurate PR description?
  • What level of sensitivity is this code? Is this performance or safety critical code that needs to be reviewed with a fine-tooth comb, line-by-line, or is it something peripheral like an internal UI or CLI that can be “good enough”?
  • Does the change appear reversible ? Is the Git diff of the change human readable? I find LLMs are often really eager to make big changes that clobber the Git diff. Incremental change is usually preferable.
  • Is the PR of an actually reviewable size? My personal bar is: <500 lines is ideal, >1000 lines is borderline unreviewable.

Design & Abstractions

  • If this is greenfield code, does the author seem to be setting up suitable abstractions? Do these abstractions seem like they’ll compose in sane ways? Do the abstractions have reasonable boundaries that do not leak information?
  • Can you zoom out your mind, picture the PR in your head, and “make it make sense” with your mental model of the code? Does it make sense at a conceptual level what is being proposed, or does this just have the veneer of “good code”?
  • Would the code be substantially improved by loading the PR into Claude Code and making a targeted one-sentence prompt? For example: “hey, could you deduplicate some of this logic between classes X and Y and make the Foo trait more modular”. (Fortunately, you can just put this as a review comment – the author will probably rewrite with an LLM anyways)

Vibe Code Smells

  • What amount of effort does it seem like the author has put into their PR? N.b. I mean the author, not the AI that wrote the code on behalf of the author. Human effort and curation still leave signs behind.
  • Did the author leave vibe-coded comments in the PR? (Often this looks like iterative process comments, of the flavor // Now we’ll not use type X anymore, per your feedback .)
  • Are imports (especially for Python and sometimes Rust) splattered around the code, instead of being present at the top?
  • Is there a weird amount of defensive copying/cloning due to a misunderstanding of e.g. immutability-by-default in Scala or how to use ownership/lifetimes in Rust?

Testing

  • For unit tests, do they cover common edge cases? Do the unit tests make assertions that meaningfully exercise the code, or are they sloppy assertions that reduce to assert!(true)? (See the sketch after this list.)
  • For unit tests, do they have a weird number of extraneous edge cases that are unlikely to ever happen in practice? (Also a vibe code smell)
  • For tests, do the tests mock out dependencies to the point where the entire test is useless/invalid? 3
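As a hypothetical illustration of that first point (Jest-style syntax, with a made-up parsePort function standing in for the code under review), compare assertions that exercise behavior against one that is effectively assert!(true):

import { parsePort } from './config'; // hypothetical module under review

test('parses a valid port', () => {
  expect(parsePort('8080')).toBe(8080);        // meaningful: checks the parsed value
});

test('rejects an out-of-range port', () => {
  expect(() => parsePort('99999')).toThrow();  // meaningful: checks the failure mode
});

test('parsePort exists', () => {
  expect(typeof parsePort).toBe('function');   // effectively assert!(true)
});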

Error Handling

  • Does the code have a weird level of paranoia about exceptions being thrown?
  • Does the code silently swallow errors with try/catch?
  • Does the code allow for exceptions/panics in areas where the code absolutely should not panic?

None of these are intended to be knocks against individual PR authors. It’s useful to assume positive intent when reviewing code. SWEs are under various pressures, visible and invisible. The “system” we have today, broadly defined, results in much more code being produced by non-humans. The best source of truth for human coding taste is still, for now, humans. Therefore, humans still need to review a lot of non-human code, as we collectively chip away at the pieces of code taste that can be incorporated back into model intuition.

Installing Every NixOS Package

Lobsters
unnamed.website
2025-12-11 19:48:06
Comments...
Original Article

Four years ago, I installed every Arch Linux package (and someone else tried the same crazy experiment with Alpine Linux). But did I really accomplish my goal? Wellll… I slightly cheated, since I only installed non-conflicting packages rather than literally every single Arch package. Besides, installing conflicting packages would overwrite one package’s files with the other’s, so they wouldn’t really both be installed.

The solution, as with any Linux problem, is NixOS. With NixOS, conflicting packages or even multiple versions of the same package can coexist in the Nix store, which are then activated into your environment via symlinks, hacked together with some nasty Bash and Perl scripts . Hooray for purely functional package management! Even better, search.nixos.org boasts a total of more than 120000 packages, nearly 10 times the measly 12232 packages that I installed in my Arch experiment.

But how many packages exactly are there in Nixpkgs? Is it really possible to install every one of them? Will I need 10 times more disk space than for Arch? How usable will the system be with every NixOS package installed? Will it even boot? Let’s find out!

Attempt 1

First, I used nixos-install to make a fresh new NixOS systemd-nspawn container (to avoid inflicting unnecessary pain on any existing NixOS installations). Here are the container-specific lines in the new system’s configuration.nix :

boot = {
  isNspawnContainer = true;
  loader.initScript.enable = true; # Creates /sbin/init
};
console.enable = true;

Next, I booted up the container with sudo systemd-nspawn -bD /mnt . Time to install every package! The pkgs attrset contains the derivations of every package, so it should be as simple as this, right?

environment.systemPackages = builtins.attrValues pkgs;

Nope! Shielded by lazy evaluation, most NixOS users have never witnessed the horrors that scuttle in the nooks and crannies of pkgs . They’ve never heard the scream of pkgs.AAAAAASomeThingsFailToEvaluate or gotten lost in the 200-line stack trace of pkgs.coqPackages_8_11.hierarchy-builder or OOMed from the recursive terror of pkgs.haskell.packages.ghc910.buildHaskellPackages.generateOptparseApplicativeCompletions.scope.AAAAAASomeThingsFailToEvaluate or raged at the stupidity of using both . in names and separators between names in pkgs.kernelPatches.hardened.6.12.patch . Don’t underestimate pkgs .

Attempt 2

Yikes… maybe I should try solving an easier version of the problem first. Instead of recursively traversing pkgs , how about only installing every top-level package? For instance pkgs.fish would be included, but not pkgs.kdePackages.dolphin . To filter out packages marked as broken, insecure, or proprietary, the trick is to check (builtins.tryEval x.value.drvPath).success :

environment.systemPackages = builtins.attrValues pkgs
  |> builtins.map (
    x:
    if
      (builtins.tryEval x).success
      && builtins.isAttrs x
      && x ? "drvPath"
      && (builtins.tryEval x.value.drvPath).success
    then
      x
    else
      [ ]
  )
  |> lib.lists.flatten;

One nixos-rebuild switch later:

error: Cannot build '/nix/store/jbmc3rplwpm2xilpxdzw528i4v9lcb5b-liquidfun-1.1.0.tar.gz.drv'.
       Reason: builder failed with exit code 1.
       Output paths:
         /nix/store/4v3q7n13vb9w7qsrf5an4kz8q0f76lzl-liquidfun-1.1.0.tar.gz
       Last 11 log lines:
       >
       > ***
       > Unfortunately, we cannot download file liquidfun-1.1.0.tar.gz automatically.
       > Please go to https://github.com/google/liquidfun/releases/download/v1.1.0/liquidfun-1.1.0 to download it yourself, and add it to the Nix store
       > using either
       >   nix-store --add-fixed sha256 liquidfun-1.1.0.tar.gz
       > or
       >   nix-prefetch-url --type sha256 file:///path/to/liquidfun-1.1.0.tar.gz
       >
       > ***
       >
       For full logs, run:
         nix log /nix/store/jbmc3rplwpm2xilpxdzw528i4v9lcb5b-liquidfun-1.1.0.tar.gz.drv
error: Cannot build '/nix/store/v1yh7qh1njidc99wmfgjx3mckqm1hqmg-liquidfun-1.1.0.drv'.
       Reason: 1 dependency failed.
       Output paths:
         /nix/store/ib63b5pgri9yh6y6z9y9cvw0w2xqvmpb-liquidfun-1.1.0

Fun. So it turns out pkgs contains quite a few packages that aren’t marked as broken but still fail to build, so we need to filter them out somehow. Untrusem from exozyme suggested using Hydra, the Nix CI system. For each update to nixos-unstable, hydra.nixos.org conveniently lists which packages it failed to build. Hydra has an API, but I found it easier to simply Ctrl-select HTML table entries and parse that. I additionally had to filter out a few more broken packages which are marked as hydraPlatforms = [ ];

This still didn’t fully build due to some pkgs.buildEnv errors, but it’s progress!

Attempt 3

After messing around in a nix repl .#nixosConfigurations.$hostname , I finally figured out how to recursively traverse pkgs with this monstrosity:

let
  pkgs = import <nixpkgs> { };
  lib = pkgs.lib;
  getpkgs =
    y: a: b:
    builtins.map (
      x:
      if
        (builtins.tryEval x.value).success
        && builtins.isAttrs x.value
        && !lib.strings.hasPrefix "pkgs" x.name # Ignore pkgs.pkgs*
        && !lib.strings.hasPrefix "__" x.name # Bad stuff
        && x.name != y.name # Definitely infinite recursion
        && x.name != "lib" # Ignore pkgs.lib, pkgs.agdaPackages.lib, and other evil stuff
        && x.name != "override" # Doesn't contain packages
        && x.name != "buildPackages" # Another copy of pkgs
        && x.name != "targetPackages" # Yet another copy of pkgs
        && x.name != "formats" # Ignore the pkgs.formats library
        && x.name != "tests" # Ignore tests
        && x.name != "nixosTests" # Ignore more tests
        && x.name != "scope" # Ignore pkgs.haskell.packages.ghc910.buildHaskellPackages.generateOptparseApplicativeCompletions.scope which contains another copy of pkgs
        && x.name != "_cuda" # Proprietary garbage
        && x.name != "vmTools" # Not a VM
        && x.name != "ghc902Binary" # Broken
        && x.name != "coqPackages_8_11" # Broken
        && x.name != "coqPackages_8_12" # Broken
        && x.name != "pypyPackages" # Broken
        && x.name != "pypy2Packages" # Broken
        && x.name != "pypy27Packages" # Broken
      then
        (
          if x.value ? "drvPath" then
            (
              if (builtins.tryEval x.value.drvPath).success then
                b
                + "|"
                + x.name
                + " "
                + (if x.value ? "pname" then x.value.pname else "unnamed")
                + " "
                + x.value.outPath
              else
                [ ]
            )
          else if a > 10 then
            abort "Probably infinite loop?"
          else
            builtins.trace a
            <| builtins.trace x.name
            <| builtins.trace b
            <| getpkgs x (a + 1)
            # For some stupid reason x.name can contain . so use | as the separator instead
            <| b + "|" + x.name
        )
      else
        [ ]
    )
    <| lib.attrsToList y.value;
in
lib.strings.concatStringsSep "\n"
<| lib.lists.flatten
<| getpkgs {
  name = "pkgs";
  value = pkgs;
} 0 "pkgs"

And here’s how to run it:

nix eval -f getpkgs.nix | sed 's/\\\\n/\\n/g' | sed 's/"//g' > all

The script took 10 minutes and 48 GB of RAM and finally spat out a 623946-line file with every Nix package. 600K? That’s 50 times more packages than my Arch experiment, and 5 times as many as search.nixos.org claimed!

Actually, many of the outPaths are duplicated, so the true count is much lower. Since I’ve suffered enough trauma already from programming in Nix, I switched over to using the best language ever for filtering the package list. First, I deduplicated the file:

import Std.Data.HashSet

def main : List String → IO Unit
  | [inp, out] => do
    let lines := (← IO.FS.readFile inp).splitOn "\n"
    let h ← IO.FS.Handle.mk out IO.FS.Mode.write
    let mut added := Std.HashSet.emptyWithCapacity
    for line in lines do
      match line.splitOn " " with
      | [_, _, store] =>
        if ¬added.contains store then
          added := added.insert store
          h.putStrLn line
      | _ => return ()
  | _ => IO.println "Wrong number of args"

Down to only 281260 packages now!

The next task was to filter out broken packages. Unfortunately, some namespaces like pkgs.python310Packages and pkgs.haskell.packages.native-bignum.ghc910 are skipped by Hydra but contain tons of packages that fail to build. Since I’d rather not spend a month compiling thousands of semi-broken packages, I adjusted my goal to be installing every package in the official cache.nixos.org binary cache. So yeah, technically I’m slightly cheating once again but whatever.

I now had two options:

  1. Use Hydra’s list of successfully built packages. However, that list is really long so my copy-and-paste trick won’t work and I’ll have to use Hydra’s API instead.
  2. Directly query cache.nixos.org.

The second option seemed slightly easier, so I hacked together a program using the best language ever and its Curl bindings :

import Curl

open Curl

def N := 1024

def main : IO Unit := do
  try
    let mut lines := (← IO.FS.readFile "dedup").splitOn "\n"
    let h ← IO.FS.Handle.mk "cached" IO.FS.Mode.write

    while 0 < lines.length do
      IO.println s!"{lines.length} lines remaining"
      let batch := lines.take N
      lines := lines.drop N

      let curlM ← curl_multi_init
      let resps ← batch.filterMapM fun line ↦ do
        match line.splitOn " " with
        | [_, _, store] =>
          let hash := store.stripPrefix "/nix/store/" |>.takeWhile (· ≠ '-')
          let resp ← IO.mkRef { : IO.FS.Stream.Buffer}
          let curl ← curl_easy_init
          curl_set_option curl <| CurlOption.URL s!"https://cache.nixos.org/{hash}.narinfo"
          curl_set_option curl <| CurlOption.NOBODY 1
          curl_set_option curl <| CurlOption.HEADERDATA resp
          curl_set_option curl <| CurlOption.HEADERFUNCTION writeBytes
          curl_multi_add_handle curlM curl
          return some (resp, line)
        | _ => return none

      while 0 < (← curl_multi_perform curlM) do
        curl_multi_poll curlM 100

      let good_lines ← resps.filterMapM fun (resp, line) ↦ do
        let bytes := (← resp.get).data
        return if _ : bytes.size > 9 then
          -- Check if bytes starts with "HTTP/2 200"
          if bytes[7] = 50 ∧ bytes[8] = 48 ∧ bytes[9] = 48 then
            some line
          else
            none
        else
          none

      -- Don't print out extra newline if empty
      if 0 < good_lines.length then
        h.putStrLn <| "\n".intercalate good_lines
        h.flush
  catch e => IO.println s!"error: {e}"

This program took around 5 minutes to run since I didn’t try optimizing it very much. Not bad, considering that it makes 281260 network requests!

Now we’re left with only 71480 packages. I wrote a third program to generate the configuration.nix containing that list of packages:

def main := do
  let lines := (← IO.FS.readFile "cached").splitOn "\n" |>.filterMap fun (line : String) ↦ match line.splitOn " " with
    | [path, pname, _] =>
      match path.splitOn "|" with
      | .nil => none
      | part :: parts =>
        -- Quote each part because it might contain weird characters
        some <| part ++ ".\"" ++ "\".\"".intercalate parts ++ "\""
    | _ => none
  IO.println s!"Final count: {lines.length}"
  let template ← IO.FS.readFile "configuration-template.nix"
  IO.FS.writeFile "configuration.nix" <| template.replace "999999" <| "\n".intercalate lines

Finally, time for the moment of truth! nixos-rebuild switch !

400 GB of downloads later, I was greeted with some more fun errors:

pkgs.buildEnv error: /mnt/nix/store/p10qdg9nz84pl9fhskqs5pj6q28i90z2-all-cabal-hashes-598216f.tar.gz is a file and can't be merged into an environment using pkgs.buildEnv!

This error occurs when nixos/modules/config/system-path.nix calls pkgs/build-support/buildenv/builder.pl to build the global environment, but we can pass the parameter ignoreSingleFileOutputs = true; to ignore it. The NixOS module system is pretty cryptic to me, so I asked my friend Ersei from exozyme for help with overriding a module. The steps (sketched below) were:

  1. Make a copy of system-path.nix with the edit I want.
  2. Add ./system-path.nix to the modules list in flake.nix .
  3. Disable the original module with disabledModules = [ "config/system-path.nix" ]; in configuration.nix .
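
For reference, here is roughly what that looks like as a NixOS module. This is a minimal sketch based on the three steps above, not the exact code from the repo, and it folds the flake.nix import into configuration.nix for brevity:

# Sketch: swap the stock system-path module for a patched copy.
# Assumes ./system-path.nix is a copy of nixos/modules/config/system-path.nix
# with ignoreSingleFileOutputs = true; added to its buildEnv call.
{ ... }:
{
  disabledModules = [ "config/system-path.nix" ];
  imports = [ ./system-path.nix ];
}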

Unfortunately, pkgs.buildEnv still wasn’t happy:

pkgs.buildEnv error: not a directory: `/nix/store/dv8q5wfm717q4qqk6pps6dyiysqqf22c-gci-0.13.7/bin/internal'

Basically, some package created the directory bin/internal, but now gci is trying to create a file at the same location. There’s probably a smarter way, but my solution was to remove gci from the list and repeatedly run nixos-rebuild to catch and remove all the other offenders. OK, yeah, I’m cheating again slightly, but I’m only omitting a few more packages that no one cares about.

match pname with
| "hypothesis" -- Cannot build '/nix/store/4k1n7lbmkihdf1jisahqrjl3k4cbmhgf-hypothesis-6.136.9-doc.drv'
| "gci" -- pkgs.buildEnv error: not a directory: `/nix/store/dv8q5wfm717q4qqk6pps6dyiysqqf22c-gci-0.13.7/bin/internal'
| "openjdk-minimal-jre" -- pkgs.buildEnv error: not a directory: `/nix/store/zmcc5baw10kbhkh86b95rhdq467s9cy1-openjdk-minimal-jre-11.0.29+7/lib/modules'
| "olvid" -- pkgs.buildEnv error: not a directory: `/nix/store/xlmxn65dw2nvxbpgvfkb4nivnk7gd2n2-olvid-2.5.1/lib/modules'
| "resources" -- pkgs.buildEnv error: not a directory: `/nix/store/36ywzl3hqg57ha341af8wpvgbwga90b8-resources-1.9.0/bin/resources'
| "temurin-bin" -- pkgs.buildEnv error: not a directory: `/nix/store/6v1bb758sn4fh2i27msxni2sy07ypqrr-temurin-bin-11.0.28/lib/modules'
| "temurin-jre-bin" -- pkgs.buildEnv error: not a directory: `/nix/store/kj0hj48hm4psjrpsa2lsd5lhnax6v9p6-temurin-jre-bin-21.0.8/lib/modules'
| "semeru-bin" -- pkgs.buildEnv error: not a directory: `/nix/store/chn2lx7gnvd3ay5x8gmnn774gw1yafv0-semeru-bin-21.0.3/lib/modules'
| "semeru-jre-bin" -- pkgs.buildEnv error: not a directory: `/nix/store/hbazadpm1x0a2nkg686796387d14539r-semeru-jre-bin-21.0.3/lib/modules'
| "zulu-ca-jdk" -- pkgs.buildEnv error: not a directory: `/nix/store/z16qj0jgm9vffy8m6533rxjzw904f7c1-zulu-ca-jdk-21.0.8/lib/modules'
| "graalvm-ce" -- pkgs.buildEnv error: not a directory: `/nix/store/1ywfss0i4rpdiyzydvqd43ahsjvq2jk6-graalvm-ce-25.0.0/lib/modules'
| "pax" -- pkgs.buildEnv error: not a directory: `/nix/store/n0w0kkr796jyiq16kff4cq4vfrnb9n9i-pax-20240817/bin/pax
| "discord-haskell" -- pkgs.buildEnv error: not a directory: `/nix/store/hrx6j0ld4dly5g75dalipa7s36pjndk9-discord-haskell-1.18.0/bin/cache
  => none
| _ =>
  if path.endsWith "bsd|source" then
    -- Either pkgs|openbsd|source or pkgs|netbsd|source which both clobber shadow's /sbin/nologin and excluding shadow may break the system
    none
  else
    match path.splitOn "|" with
    | .nil => none
    | part :: parts =>
      -- Quote each part because it might contain weird characters
      some <| part ++ ".\"" ++ "\".\"".intercalate parts ++ "\""

And now for the real moment of truth! nixos-rebuild switch !

env: ‘php’: No such file or directory
error: your NixOS configuration path seems to be missing essential files.
To avoid corrupting your current NixOS installation, the activation will abort.

This could be caused by Nix bug: https://github.com/NixOS/nix/issues/13367.
This is the evaluated NixOS configuration path: /nix/store/vqrhc9rg3vahbpxbjyb5ribnxq1sz2av-nixos-system-nixos-26.05.20251205.f61125a.
Change the directory to somewhere else (e.g., `cd $HOME`) before trying again.

If you think this is a mistake, you can set the environment variable
NIXOS_REBUILD_I_UNDERSTAND_THE_CONSEQUENCES_PLEASE_BREAK_MY_SYSTEM to 1
and re-run the command to continue.
Please open an issue if this is the case.

Uh… looks like even nixos-rebuild is terrified! Fine then, break my system. NIXOS_REBUILD_I_UNDERSTAND_THE_CONSEQUENCES_PLEASE_BREAK_MY_SYSTEM=1 nixos-rebuild switch !

And as promised, it did in fact break my system by changing my shell from Fish to a Go Fish game, so now I can’t even log in!!!

Wait, why am I panicking? This is NixOS! I can just roll back by editing /sbin/init to boot a different system generation. So, I booted an older generation, changed my shell in configuration.nix to Bash, and after one last rebuild it’s finally time to enjoy NixOS at its heaviest!
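
A minimal sketch of that shell change, assuming the stock users.defaultUserShell option (the post doesn’t show the actual line, so the exact option used is an assumption):

# Sketch: pin the login shell to Bash so a stray "fish" binary cannot hijack logins.
# users.users.<name>.shell would work as well; which option the author used is an assumption.
{ pkgs, ... }:
{
  users.defaultUserShell = pkgs.bash;
}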

          ▗▄▄▄       ▗▄▄▄▄    ▄▄▄▖
          ▜███▙       ▜███▙  ▟███▛
           ▜███▙       ▜███▙▟███▛
            ▜███▙       ▜██████▛
     ▟█████████████████▙ ▜████▛     ▟▙
    ▟███████████████████▙ ▜███▙    ▟██▙
           ▄▄▄▄▖           ▜███▙  ▟███▛
          ▟███▛             ▜██▛ ▟███▛
         ▟███▛               ▜▛ ▟███▛
▟███████████▛                  ▟██████████▙
▜██████████▛                  ▟███████████▛
      ▟███▛ ▟▙               ▟███▛
     ▟███▛ ▟██▙             ▟███▛
    ▟███▛  ▜███▙           ▝▀▀▀▀
    ▜██▛    ▜███▙ ▜██████████████████▛
     ▜▛     ▟████▙ ▜████████████████▛
           ▟██████▙       ▜███▙
          ▟███▛▜███▙       ▜███▙
         ▟███▛  ▜███▙       ▜███▙
         ▝▀▀▀    ▀▀▀▀▘       ▀▀▀▘
root@nixos
----------
OS: NixOS 26.05 (Yarara) x86_64
Host: MS-7C94 (1.0)
Kernel: Linux 6.17.9-300.fc43.x86_64
Uptime: 2 days, 23 hours, 22 mins
Packages: 69700 (nix-system)
Shell: fish 4.2.1
Display (DELL U2515H): 2560x1440 in 25", 60 Hz [External]
Terminal: xterm-256color
CPU: AMD Ryzen 9 3950X (32) @ 4.76 GHz
GPU: AMD Radeon RX 7900 XTX [Discrete]
Memory: 5.81 GiB / 62.70 GiB (9%)
Swap: 1.43 GiB / 8.00 GiB (18%)
Disk (/): 644.04 GiB / 1.82 TiB (35%) - btrfs
Local IP (enp42s0): 10.187.0.152/21
Locale: en_US.UTF-8

                        
                        

I generated that using fastfetch --pipe 0 | aha, which of course are already installed. Fastfetch’s getNixPackagesImpl actually times out, but in theory it would return 69700, which is a bit of a scam since I obviously installed 71440 packages. Either way, it’s still more than 5 times the number of packages I installed on Arch!

The heaviest objects in the universe meme but it’s my /nix/store

The Nix store is 1.34 TB with 744954 paths, but fortunately only takes up 671 GB on disk thanks to the magic of btrfs compression. Honestly, the total size of Nixpkgs seems smaller than I expected, and the total network usage of this experiment was about the same as a few people using NixOS for a year.
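
The compression credited here is the usual btrfs mount-option setup; a minimal sketch of how that is commonly declared in a NixOS configuration (the device label and mount options are assumptions, not taken from the author’s config):

# Sketch: transparent zstd compression on a btrfs root filesystem.
# The device label and mount options here are placeholders, not the author's values.
{ ... }:
{
  fileSystems."/" = {
    device = "/dev/disk/by-label/nixos";
    fsType = "btrfs";
    options = [ "compress=zstd" "noatime" ];
  };
}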

Surprisingly, the system boots quickly and just feels like a boring (although very obese) NixOS system. There are only a few oddities, such as ls using the 9base implementation rather than the one from coreutils. I have no idea what I should do with this system now.

Anyways, check out this repo for all my code and more. It should be completely reproducible if you want to try out the crazy configuration.nix too (but please don’t if you value your sanity and disk space). Lastly, thanks again to exozyme and the other exozyme members for answering all my silly Nix questions!

Pop!_OS 24.04 LTS Released: A Letter From Our Founder

Lobsters
blog.system76.com
2025-12-11 19:37:35
Comments...
Original Article

epoch /ĕp′ək, ē′pŏk″/
noun
the beginning of a distinctive period in the history of someone or something.

Pop!_OS 24.04 LTS with COSMIC Desktop Environment Epoch 1

If you’re ambitious enough, or maybe just crazy enough, there eventually comes a time when you realize you’ve reached the limits of current potential, and must create something completely new if you’re to go further.

This year, System76 turned twenty. For twenty years we have shipped Linux computers. For seven years we’ve built the Pop!_OS Linux distribution. Three years ago it became clear we had reached the limit of our current potential and had to create something new. Today, we break through that limit with the release of Pop!_OS 24.04 LTS with the COSMIC Desktop Environment.

Today is special not only in that it’s the culmination of over three years of work, but even more so in that System76 has built a complete desktop environment for the open source community. We’re proud of this contribution to the open source ecosystem. COSMIC is built on the ethos that the best open source projects enable people to not only use them, but to build with them. COSMIC is modular and composable. It’s the flagship experience for Pop!_OS in its own way, and can be adapted by anyone that wants to build their own unique user experience for Linux.

Thank you for your patience while we built COSMIC. We know it’s been a long ride, and appreciate you taking it with us. Pop!_OS 24.04 LTS doesn’t feel like moving forward three years. It feels like leaping forward a decade. This day marks the foundation for our next twenty years and the first of what will be many rapid innovations.

And thank you to System76 customers. COSMIC is entirely funded by System76 hardware sales. Not only are you getting the best Linux hardware on the planet, you're investing in the future of the Linux desktop that we all love.

I hope you love what we’ve built for you. Now go out there and create. Push the limits, make incredible things, and have fun doing it!

Carl Richell
Founder and CEO
System76

Download Pop!_OS 24.04 LTS

Download Pop!_OS 24.04 LTS at https://system76.com/pop/

Pop!_OS 24.04 LTS Highlights

  • New COSMIC Desktop Environment Experience
  • New Pop!_OS 24.04 LTS for ARM computers
    ◦ Officially supported on the System76 Thelio Astra desktop
    ◦ Community support for non-System76 hardware enabled by Tow-Boot
  • New hybrid graphics support for longer battery life
    ◦ No need to change modes
    ◦ Apps that request the discrete GPU will automatically run on the correct GPU
    ◦ Manually run an app on your preferred GPU by right-clicking on the app icon
  • Easy installation with full disk encryption
  • Refresh install feature by holding Space at boot or from the ISO
    ◦ Reinstall the OS anytime while keeping files, settings, and Flatpak user applications
    ◦ The Refresh feature will also arrive in COSMIC Settings after release
  • Broad hardware support

COSMIC Desktop Environment Highlights

  • Intuitive window tiling that can be used with the mouse or keyboard
    ◦ Activate tiling with a simple toggle in the panel
    ◦ Tiling per workspace and per display
    ◦ Easy to learn keyboard shortcuts
    ◦ Rearrange windows by dragging them with the mouse. Visual hints show where the window will land.
  • Featureful Workspaces
    ◦ Horizontal or vertical workspaces
    ◦ Workspace per display or spanned across displays
    ◦ Drag workspaces to re-arrange them or move an entire workspace to a different display
    ◦ Pin workspaces so they’re never removed
    ◦ Workspace settings are persistent. A tiled and pinned workspace will be tiled and pinned after reboot.
    ◦ Add the “Numbered Workspaces” applet to your Panel or Dock to always see the workspace number you’re on
  • Smooth multi-display experience
    ◦ Effortlessly mix and match hi-dpi and standard resolution displays
    ◦ Displays are automatically scaled based on pixel density and display scaling can be fine-tuned in Settings
    ◦ Display settings are persistent. Plug in the same display and its settings will return.
    ◦ Unplug a display and the windows on that display will move to a new workspace on a remaining display
  • Customization galore
    ◦ Theme your desktop from Settings with easy color pickers
    ◦ Set up your personalized layout
    ••• Panel + Dock or single Panel
    ••• Put the Panel and Dock on any screen edge
    ◦ Add and arrange features (Applets) on the Panel or Dock from Settings
  • Fast desktop navigation and easy keyboard shortcuts
    ◦ Press the Super (or the Windows key) to activate the Launcher
    ••• Type the app you want and press enter
    ••• Type the name of an open app then enter to switch to it
    ••• Type ? to learn more features like web and GitHub search, calculator, and running commands directly from the launcher
    ◦ Use the same keyboard shortcuts to focus or arrange windows across workspaces and displays
    • Super+Arrows to change the focused window. Super+Shift+Arrows to move windows.
  • Stacks, snapping, and sticky windows
    ◦ Stack windows to combine them into tab groups like a browser
    ••• Right click on the header and choose Create Window Stack. Then drag another window to the stack.
    ••• When tiling windows, simply drag the window on top of another to create a stack.
    ◦ When using floating (non-tiled) windows, drag them to the display edges to snap them into a quarter or half tile
    ◦ Make a window stay on top of other windows and follow you around to other workspaces or displays by right-clicking the header and choosing  “Sticky window”
  • Faster, more responsive applications
    ◦ COSMIC Files
    ◦ COSMIC Store
    ◦ COSMIC Terminal
    ◦ COSMIC Text Editor
    ◦ COSMIC Media Player
    ◦ COSMIC Screenshot
    ◦ Install more apps Made for COSMIC from COSMIC Store
  • Written in the Rust programming language
    ◦ For memory safety and high performance


Download Pop!_OS 24.04 LTS

Download Pop!_OS 24.04 LTS at https://system76.com/pop/

Upgrading from Pop!_OS 22.04 LTS to Pop!_OS 24.04 LTS

Pop!_OS 22.04 LTS users will receive an upgrade notification in the OS starting January 2026. If you wish to upgrade to Pop!_OS 24.04 LTS before then, after backing up your files, open Terminal and run

pop-upgrade release upgrade -f

Pop!_OS 24.04 LTS released

Linux Weekly News
lwn.net
2025-12-11 19:31:04
Version 24.04 LTS of the Ubuntu-based Pop!_OS distribution has been released with the COSMIC Desktop Environment: Today is special not only in that it's the culmination of over three years of work, but even more so in that System76 has built a complete desktop environment for the open source commu...
Original Article

Version 24.04 LTS of the Ubuntu-based Pop!_OS distribution has been released with the COSMIC Desktop Environment:

Today is special not only in that it's the culmination of over three years of work, but even more so in that System76 has built a complete desktop environment for the open source community. We're proud of this contribution to the open source ecosystem. COSMIC is built on the ethos that the best open source projects enable people to not only use them, but to build with them. COSMIC is modular and composable. It's the flagship experience for Pop!_OS in its own way, and can be adapted by anyone that wants to build their own unique user experience for Linux.

In addition to the COSMIC desktop environment, Pop!_OS is now available for Arm computers with the 24.04 LTS release, and the distribution has added hybrid graphics support for better battery life. LWN covered an alpha version of COSMIC in August 2024.



My productivity app is a never-ending .txt file (2022)

Hacker News
jeffhuang.com
2025-12-11 19:30:58
Comments...
Original Article

Over 14 years of todos recorded in text

By Jeff Huang , updated on 2022-03-21

The biggest transition for me when I started college was learning to get organized. There was a point when I couldn't just remember everything in my head. And having to constantly keep track of things was distracting me from whatever task I was doing at the moment.

So I tried various forms of todo lists, task trackers, and productivity apps. They were all discouraging because the things to do kept getting longer, and there were too many interrelated things like past meeting notes, calendar appointments, idea lists, and lab notebooks, which were all on different systems.

I gave up and started just tracking in a single text file and have been using it as my main productivity system for 14 years now. It is so essential to my work now, and has surprisingly scaled with a growing set of responsibilities, that I wanted to share this system. It's been my secret weapon.

Prerequisite : A calendar. The one outside tool I use is an online calendar, and I put everything on this calendar, even things that aren't actually for a fixed time like "make a coffee table at the workshop" or "figure out how to recruit new PhD students" — I'll schedule them on a date when I want to think about it. That way all my future plans and schedule are together, and not a bunch of lists I have to keep track of.

Making the Daily List : Every night before I go to bed, I take all the items on my calendar for the next day and append it to the end of the text file as a daily todo list, so I know exactly what I'm doing when I wake up. This list contains scheduled tasks (2pm meeting with Madonna, 4pm office hours), errands (sign a form, return a book), and work items (review a paper, prepare a presentation). It also lets me think about whether I've got the right amount of work for a day.

Anything I don't want to do tomorrow, I'll shuffle back into my calendar on later dates. If the task is too big, I'll break it down into a piece for tomorrow, and the rest for another date. After years of doing this, I've gotten pretty good at estimating what I can finish in a day. Here's an example with names replaced so you can see what it looks like when I move a day's schedule from my calendar.

2021-11-31
11am meet with Head TAs - where are things at with inviting portfolio reviewers?
11:30am meet with student Enya (interested in research)
review and release A/B Testing assignment grading
12pm HCI group meeting
- vote for lab snacks
send reminders for CHI external reviewers
read Sketchy draft
Zelda pick up eye tracker - have her sign for it
update biosketch for Co-PI
3:15pm join call with Umbrella Corp and industry partnership staff
3:45pm advising meet with Oprah
4pm Rihanna talk (368 CIT)
5pm 1:1 with Beyonce #phdadvisee
6pm faculty interview dinner with Madonna

As a Record : That daily todo list is where I also take notes, so it's a to do list that turns into a what done list. The best thing about these daily lists is I keep them all in a single text file separated by dates, so I have a record of everything I have ever done and when I did it.

My current file was created 9 years ago when I started my current job. It serves as a research notebook, and as meeting minutes. I have 51,690 handwritten lines in one file now, documenting everything I have done as a professor, and nearly every person I have met with, along with notes about what we discussed or ideas I had. Here's what my list looks like at the end of the day, representing work accomplished.

2021-11-31
11am meet with Head TAs - where are things at with inviting portfolio reviewers? A: got 7/29 replies
- need 3 TAs for Thursday lab
- Redesign assignment handout will be done by Monday, ship Thursday
11:30am meet with student Enya (interested in research) - they're a little inexperienced, suggested applying next year
review and release A/B Testing assignment grading
12pm HCI group meeting
- automatically generate thumbnails from zoom behavior on web pages
- #idea subliminal audio that leads you to dream about websites
- Eminem presenting Nov 24
- vote for lab snacks. A: popcorn and seaweed thing
got unofficial notification ARO YIP funding award #annual #cv
read Sketchy paper draft - needs 1 more revision - send to Gandalf to look at?
Zelda pick up eye tracker - have her sign for it
update biosketch for Co-PI
unexpected drop in from Coolio! #alumni - now a PM working on TravelAdvisor, thinking about applying to grad school
3:15pm join call with Umbrella Corp and industry partnership staff - they want to hire 20 data science + SWE interns (year 3), 4 alums there as SWE
3:45pm advising meet with Oprah - enjoyed CS 33 - interning at Facebook
4pm Rihanna talk (368 CIT)
5pm 1:1 with Beyonce #phdadvisee
- stuck on random graph generating crash - monitor memory/swap/disk? - ask Mario to help?
- got internship at MSR with Cher - start May 15 or 22
- will send me study design outline before next meeting
- interviewing Spartacus as potential RA for next semester
6pm faculty interview dinner with Madonna (Gracie's)
- ask about connection with computer vision
- cool visual+audio unsupervised comparison, thoughtful about missing data, would work with ugrads (?), likes biking, teach compvis + graphics
- vote #HIRE
#note maybe visit Monsters University next spring, Bono does related work

Shortcuts and Features : I use a consistent writing style so things are easily searchable, with a few shorthands. When I search for "meet with", it shows that I have had over 3,000 scheduled meetings. I have some tags like #idea for new ideas to revisit when I want project ideas, #annual for things to put on my next annual report, #nextui for things to add the next time I run my next UI course.

A text file is incredibly flexible, and at any point, I can quickly glance to see what I've done that day and what's left. I usually keep an empty line between tasks completed and upcoming tasks. When a task is completed, I move the empty line. Any leftover tasks from the current day can go back into the calendar for when I may want to tackle them again, but that is rare because tasks were already sized into what I can do on that day. I can calculate aggregate statistics using the search box, list all the lines containing a tag, and perform other operations using my text editor. I use UltraEdit because I'm familiar with it, but any text editor would have similar capabilities.

Email : Email is obviously a part of my workflow. Everyone has all sorts of productivity advice about handling it, but I find a simple flagging system is sufficient — flag Red if it's something I need to deal with, flag Orange if I need to deal with it eventually but requires some thinking or someone else to handle it, and flag Yellow for emails I send that I am waiting on a reply for, so I know to follow up later. I'll flag emails as they come in, whenever it's convenient.

At the end of the day, I'll do a quick review of the Orange and Yellows to see if any need to be followed up or should become Red. Some people's workflows revolve around obsessively cleaning their Inbox. I don't really care about keeping my inbox empty because then I feel like I have new work to do whenever email comes in.

So my daily routine looks like

  1. look at the daily todo list I wrote last night to find out what I'm doing today
  2. do scheduled things on that list during the day
  3. when I have free (unscheduled) time, do the floating tasks on my list and work on Red-flagged emails
  4. at the end of the day:
     ◦ do a quick review of Orange/Yellow emails to see if they need any handling
     ◦ copy the next day's calendar items to the bottom of the text file

This process has a few nice properties :

  • It's easy to immediately see what to do when I wake up
  • I don't need to remember in my head the things to do later (following up on emails, future tasks)
  • It's easy to recall what happened in the past and see how much I can actually accomplish in a day
  • There's no running "todo" list with items that keep getting pushed back day after day
  • I use Remote Desktop so everything is accessible from every device

My daily workload is completely under my control the night before; whenever I feel overwhelmed with my long-term commitments, I reduce it by aggressively unflagging emails, removing items from my calendar that I am no longer excited about doing, and reducing how much work I assign myself in the future.

It does mean sometimes I miss some questions or don't pursue an interesting research question, but it helps me maintain a manageable workload.

So that's it. I would love to hear from you if you try my system, or have some ideas about it!

An SVG is all you need

Hacker News
jon.recoil.org
2025-12-11 19:25:14
Comments...
Original Article

SVGs are pretty cool - vector graphics in a simple XML format. They are supported on just about every device and platform, are crisp on every display, and can have scripts embedded in them to make them interactive. They're way more capable than many people realise, and I think we can capitalise on some of that unrealised potential.

Anil's recent post Four Ps for Building Massive Collective Knowledge Systems got me thinking about the permanence of the experimentation that underlies our scientific papers. In my idealistic vision of how scientific publishing should work, each paper would be accompanied by a fully interactive environment where the reader could explore the data, rerun the experiments, tweak the parameters, and see how the results changed. Obviously we can't do this in the general case - some experiments are just too expensive or time-consuming to rerun on demand. But for many papers, especially in computer science, this is entirely feasible.

That line of thought reminded me of a project I tackled as a post-doc in the Department of Plant Sciences here in Cambridge. I was writing a paper on synergy in fungal networks and built a tiny SVG visualisation tool that let readers wander through the raw data captured from a real fungal network growing in a petri dish. I dug it up recently and was surprised (and delighted) to see that it still works perfectly in modern browsers - even though the original “cover page” suggested Firefox 1.5 or the Adobe SVG plug-in (!). Give it a spin; click the 'forward', 'back' and other buttons below the petri dish!

And that, dear reader, is literally all you need. A completely self-contained SVG file can either fetch data from a versioned repository or embed the data directly, as the example does. It can process that data, generate visualisations, and render knobs and sliders for interactive exploration. No server-side magic required - everything runs client-side in the browser, served by a plain static web server, and very easy to share.

How does it fit in with Anil's four Ps?

  • Permanence: SVGs can be assigned DOIs just like papers, blog posts, or datasets. The fact that the above SVG still works after two decades is a testament to the durability of the format.
  • Provenance: Because SVG is plain text, it plays nicely with version control systems such as Git. When an SVG pulls in external data, the same provenance-tracking strategies Anil describes for datasets apply here as well.
  • Permission: Once again, with the separation between the processing in the SVG and the data that it works on, the same permissioning models apply as for data in general.
  • Placement: SVGs are inherently spatial; it's very easy, for example, to make beautiful world maps with SVG.

The SVG above is only a visualisation tool for data; it doesn't really do any processing, but it certainly could. The biggest change that's happened over the 20 years since I wrote this is the massive increase in the computation power available in the browser. It would be entirely feasible to implement the entire data analysis pipeline for that paper in an SVG today, probably without even spinning up the fans on my laptop!

So this is yet another tool in our ongoing effort to be able to effortlessly share and remix our work - added to the pile of Jupyter notebooks, Marimo notebooks, the slipshow / x-ocaml combination, Patrick's take on Jon Sterling's Forester, my own notebooks, and many others - and this is a subset of what we're using just in our own group!

Pop!_OS 24.04 LTS with Cosmic Desktop Environment Released

Hacker News
blog.system76.com
2025-12-11 19:03:45
Comments...
Original Article


Going Through Snowden Documents, Part 1

Hacker News
libroot.org
2025-12-11 18:52:08
Comments...
Original Article

We are building a comprehensive archive and analysis project examining published documents leaked by Edward Snowden. Our methodology involves systematically reviewing each available document with particular attention to small details and information that has received little or no public attention since the initial 2013 disclosures. Throughout this process, we will publish posts highlighting interesting previously unreported findings. The main project will hopefully be complete and made public in mid-to-late 2026.

This is Part 1 of our "Going Through Snowden Documents" series.

Document: CNE Analysis in XKEYSCORE

Classification: TOP SECRET//COMINT//REL TO USA, AUS, CAN, GBR, NZL

Date: October 15, 2009

Published by: The Intercept ( July 1 and July 2, 2015 )

Authors: Morgan Marquis-Boire, Glenn Greenwald, and Micah Lee

While The Intercept published this document, the accompanying articles focus on NSA's XKEYSCORE system broadly and do not analyze this specific document. The document appears only in the "Documents published with this article" sections, without dedicated coverage. Academic searches, news archives, and general web searches reveal virtually no subsequent analysis or citation of this document. This pattern of important documents being published but never publicly analyzed is unfortunately very common among the published Snowden documents.


This October 2009 33-page document, in a slideshow format, is an internal NSA training presentation demonstrating how analysts use XKEYSCORE to search and analyze data collected through Computer Network Exploitation (CNE), the NSA's term for active hacking operations. While framed as instructional examples showing various search capabilities, the screenshots display real surveillance operations with identifiable targets and captured data.

The screenshots in the document are such poor quality that, at times, reading the text is very difficult. However, by examining the context and surrounding text (or surrounding pages), the text can be inferred with a very high probability. This has certainly contributed to why many documents have not been studied more thoroughly in public, as many are similarly low quality with scrambled text.

CNE operation against Chinese defense contractor Norinco

One of the most significant previously unreported findings in this document is evidence of NSA surveillance targeting Norinco, China North Industries Corporation, one of the world's largest state-owned defense contractors. Norinco ranks among the world's top 100 defense companies by revenue and serves as a major exporter of military equipment to Pakistan, Iran, Venezuela, Zimbabwe, and dozens of other countries, many of which have contentious relationships with the United States.

On page 18, a screenshot from XKEYSCORE's Metaviewer interface displays a "Histogram of @Domain" view with a bar graph showing email volume across 10 domain names followed by a data table with formatted surveillance results. The query appears to be a converged search combining multiple distinct surveillance targets: Mexican federal agencies (ssp.gob.mx at 452 emails, pfp.local at 158 emails), Norinco-related domains (mail.norinco.cn, businessmonitor.com, bmi.msgfocus.com, zhenhuaoil.com, and lms-ms-daemon, each showing 3 emails), and two additional targets (steels-net.cu and inwind.it, each with 1 email). This convergence of seemingly unrelated targets in a single query demonstrates XKEYSCORE's ability to simultaneously analyze multiple surveillance operations.

The first five entries in the results table contain:

Email User Name | Datetime            | Highlights | @Domain             | Subject                               | Chain
[REDACTED]      | 2009-10-10 05:15:10 | CNE        | mail.norinco.cn     | 28-10 senior contacts in India for zh | 0kqe00g01mrdii@mail.norinco.cn&kate.strut
[REDACTED]      | 2009-10-10 05:15:10 | CNE        | businessmonitor.com | 28-10 senior contacts in India for zh | 0kqe00g01mrdii@mail.norinco.cn&kate.strut
[REDACTED]      | 2009-10-10 05:15:10 | CNE        | bmi.msgfocus.com    | 28-10 senior contacts in India for zh | 0kqe00g01mrdii@mail.norinco.cn&kate.strut
[REDACTED]      | 2009-10-10 05:15:10 | CNE        | zhenhuaoil.com      | 28-10 senior contacts in India for zh | 0kqe00g01mrdii@mail.norinco.cn&kate.strut
[REDACTED]      | 2009-10-10 05:15:10 | CNE        | lms-ms-daemon       | 28-10 senior contacts in India for zh | 0kqe00g01mrdii@mail.norinco.cn&kate.strut

Screenshot from page 18 showing emails related to Norinco

All entries are marked with the "CNE" highlight tag, indicating the data came from CNE operations, active hacking intrusions rather than passive network intercepts. Critically, all five entries share an identical "Chain" value indicating this is a single email captured at multiple points as it traversed Norinco's email infrastructure. The multiple domains – businessmonitor.com (newsletter sender), bmi.msgfocus.com (newsletter delivery service), mail.norinco.cn (Norinco's mail server), zhenhuaoil.com (Norinco's subsidiary), and lms-ms-daemon (the default domain name for Sun Java Messaging Server commonly used in enterprise email infrastructure) – represent the newsletter email's routing path through Norinco's network. This indicates that NSA achieved deep network penetration with visibility across multiple servers and routing points within Norinco's corporate email infrastructure, not just a single interception point. The compromise extended to Zhenhua Oil (Norinco's oil exploration subsidiary), indicating enterprise-wide access.

Redaction failure exposing NSA agent username

Most XKEYSCORE search interfaces display a welcoming message showing the analyst's internal NSA username. In the document all usernames have been redacted from the screenshots except one left unredacted by mistake.

Screenshot showing redacted NSA username

On page 9, the username " cryerni " is visible in the screenshot.

Screenshot showing NSA agent username

This username most likely belongs to the NSA analyst who created the presentation. The seven-character length matches the redacted name on the first page, judging by the surrounding unredacted font. A seven-character length also matches other NSA agents' usernames found in other documents (more on that in upcoming parts).

CNE operation against Mexican federal law enforcement

On page 18, the XKEYSCORE Metaviewer displays email extraction results showing surveillance of Mexican federal law enforcement from the domains ssp.gob.mx (Secretaría de Seguridad Pública) and pfp.local (Policía Federal Preventiva). Email subjects include:

101009 EII LA PAZ, BAJA CALIFORNIA
101009 EII MEXICALI, BAJA CALIFORNIA
101009 EII CIUDAD JUÁREZ, CHIHUAHUA

Screenshot from page 18 showing emails related to ssp.gob.mx and pfp.local

"EII" likely stands for "Estructura de Información de Inteligencia" or similar internal reporting format. The dates (101009 = October 10, 2009) and locations indicate daily intelligence reports from Mexican federal police units in Baja California's border region and Ciudad Juárez, one of Mexico's most violent cities during the peak of cartel warfare under President Felipe Calderón's military-led offensive against drug cartels.

NSA surveillance of these communications likely supported US counter-narcotics operations, identified compromised Mexican officials, and monitored cartel structures and government response capabilities. However, this represents surveillance of a nominal ally's law enforcement agencies without apparent Mexican government knowledge or consent. All entries were marked "CNE," again indicating active computer compromise rather than passive intercept.

CNE operation against Iran's customs and rails

Another interesting finding appears on page 17, showing document metadata extraction results with the name "Iran OP Customs and Rail Extracted Docs". The results table displays documents captured from a file path containing "lap top drive" and "Private Inbox", with all entries marked "CNE" in the Highlights column, indicating NSA compromised a portable computer likely belonging to someone working in Iranian transportation or customs infrastructure. The implant performed a complete directory walk and extracted Word documents from the user's private folders.

Screenshot from page 17 titled 'XK Metaviewer Iran OP Customs and Rail Extracted Docs'
Screenshot of the page 17 table showing the surveillance results

New surveillance program codenames

Several program codenames mentioned in this document don't appear in any other published Snowden documents or in previous reporting. They are also absent from the websites that catalogue the codenames found in the Snowden documents, and from other NSA/GCHQ-related articles and documents.

TURBOCHASER - The document describes TURBOCHASER as an NSA database for "profiles" and for "future tasking", appearing alongside MARINA (the well-documented NSA metadata repository). The name suggests rapid-cycling or high-speed processing ("turbo") of pursuit targets ("chaser"). Based on context, TURBOCHASER likely handled specific metadata types or geographic regions that MARINA didn't cover. The document's brief mention provides no additional details.

TUCKER - References in the document suggest TUCKER is an exploitation framework comparable to UNITEDRAKE (the well-documented full-featured Windows implant). The document lists TUCKER's sub-projects including OLYMPUS, EXPANDINGPULLY, and UNIX, indicating TUCKER was a platform hosting multiple specialized payloads and/or (post-)exploitation tools.

SHADOWQUEST , WAYTIDE , GREENCHAOS - These appear as collection source identifiers in the document. The document shows them as input sources feeding CNE data into XKEYSCORE. Notably, FOXACID, the well-documented NSA exploit server system used to deliver malware to targets, also appears in this context with the suffix FOXACID6654, suggesting it functioned not just as an exploitation delivery mechanism but also as a collection source identifier once targets were compromised. This reveals FOXACID's dual role: initial compromise vector and ongoing data collection infrastructure.

The input sources shown include:

  • FOXACID6654 - collecting wireless survey data
  • SHADOWQUEST35 - collecting wireless survey data
  • WAYTIDE1173 - collecting wireless intelligence
  • GREENCHAOS15 - source of the Chinese keylogger data

The numeric suffixes (6654, 35, 1173, 15) likely designate a specific server or operational instance, possibly corresponding to geographic regions, operational theaters, or specific TAO teams.

Other

Finally, the document showcases several detailed cases of NSA's CNE capabilities, confirming and adding specific context to techniques that have been reported on more generally since 2013.

FOGGYBOTTOM: HTTP activity surveillance

Screenshot showing the captured HTTP activity

Pages 19-20 showcase FOGGYBOTTOM for monitoring HTTP activity captured through CNE operations. FOGGYBOTTOM is a computer implant plug-in that records logs of internet browsing histories and collects login details and passwords used to access websites and email accounts. These pages show detailed browser surveillance of a target identified by case notation YM.VALAGWAADTC (Yemen) on October 14, 2009. The system captured:

  • Multiple Facebook login attempts (login.facebook.com with "login_attempt=1" POST requests)
  • Arabic-language Facebook browsing (ar-ar.facebook.com)
  • Saudi Arabian Google searches (www.google.com.sa with "hl=ar" indicating Arabic language)
  • Yemeni news sites (www.14october.com, www.26sep.net, www.althawranews.net)
  • Arabic sports forums (forum.kooora.com - a popular Middle Eastern sports discussion site)

The surveillance captured not just URLs but complete HTTP request details including POST data and URL parameters. The "dnt_payload/browser" formatter shows the target's local time, timezone offset, and HTTP POST form data. Since this data comes from a CNE implant running on the compromised computer itself – not passive network interception – it captures web traffic before encryption occurs. The implant sees the browsing data whether the connection uses HTTP or HTTPS, providing complete visibility into all browsing activity including encrypted sessions that would be opaque to network-level surveillance.

Windows registry surveillance

Screenshot showing the Windows registry entries

Page 26 demonstrates XKEYSCORE's capability to search and analyze Windows registry data extracted from compromised machines. The screenshots show registry queries returning UserAssist keys: Windows registry entries that record every program a user has executed, how many times, and when they last ran it. This data is maintained by Windows for user-interface optimization but becomes a detailed forensic record when captured by NSA implants.

Multi-lingual keylogger capabilities

Screenshot of keystroke data captured from a target in China using QQ Messenger and Microsoft Excel

Pages 24-25 demonstrate XKEYSCORE's keylogger capabilities with actual captured keystrokes from a compromised computer identified as GREENCHAOS15 in China. The target was using QQ.exe (China's largest instant messaging platform owned by Tencent), Microsoft Excel, and Microsoft Access. The keylogger captured complete Chinese character input, control key sequences, hexadecimal codes for special characters, window titles showing conversation participants, and even deleted text and editing actions. In Excel, the system recorded every keystroke including numerical entries, navigation inputs (Delete, NumPad entries), and cell references (D4, H2, D53, etc.), showing the target working on a spreadsheet titled "3C证书导入工作周报0928-1001.xls" (3C Certificate Import Work Weekly Report 09/28-10/01). The target appeared to be an office worker handling administrative tasks related to China's 3C certification system (China Compulsory Certificate for product safety/quality). This demonstrates NSA's ability to capture multi-lingual keystrokes across all applications with complete context preservation.

"vpn in docs"

XKEYSCORE results flagging the keywords 'vpn' and 'pptp' found within captured documents and emails

The document also demonstrates how XKEYSCORE uses a generic "tech strings" search to automatically identify and flag arbitrary keywords that an analyst queries. This feature appears to function as a catchall system for finding terms of interest in data streams that lack a more specific parser. The examples show XKEYSCORE tagging the strings "vpn" and "pptp" inside a wide variety of captured data. This includes the content of emails (email_body), the body of local documents (document_body with file paths like C:\TNI-095CC.DOC), and other raw data payloads exfiltrated from implants (tech_body). As nearly all entries are highlighted with "CNE," this reveals that NSA implants actively scan a target's private files and communications for these keywords. The resulting intelligence allows analysts to discover a target's security posture, identify potential vulnerabilities, and find information such as credentials or server details that can be leveraged to gain access to privileged systems or map internal networks.

This document is a good example of the significant intelligence hiding in plain sight within the published Snowden documents. A detailed review can reveal previously unreported intelligence operations, such as the CNE operation against a major Chinese defense contractor. These findings underscore the importance of a systematic review of the documents. It is also important to acknowledge the inherent limitations of analyzing any single document in isolation, as we did in this post: a single-document analysis offers only a snapshot with limited context.

Rust 1.92.0 released

Linux Weekly News
lwn.net
2025-12-11 18:40:12
Version 1.92.0 of Rust has been released. This release includes a number of stabilized APIs, emits unwind tables by default on Linux, validates input to #[macro_export], and much more. See the separate release notes for Rust, Cargo, and Clippy. ...
Original Article

[Posted December 11, 2025 by jzb]

Version 1.92.0 of Rust has been released. This release includes a number of stabilized APIs, emits unwind tables by default on Linux, validates input to #[macro_export] , and much more. See the separate release notes for Rust , Cargo , and Clippy .



Trump Administration Diverted $2 Billion in Pentagon Funds to Target Immigrants, Lawmakers Say

Intercept
theintercept.com
2025-12-11 18:35:42
The Trump administration is funding its anti-immigrant campaign with money set aside for defense, Democratic lawmakers wrote. The post Trump Administration Diverted $2 Billion in Pentagon Funds to Target Immigrants, Lawmakers Say appeared first on The Intercept....
Original Article

The Trump Administration has siphoned off at least $2 billion from the Pentagon budget for anti-immigration measures, with plans to more than double that number in the coming fiscal year, according to a report released Thursday by Democratic lawmakers.

The report, titled “ Draining Defense ,” took aim at the Trump administration for what it described as prioritizing hard-line border initiatives and political stunts at the expense of the military’s ability to protect the nation and respond to emergencies.

“It’s an insult to our service members that Pete Hegseth and Kristi Noem are using the defense budget as a slush fund for political stunts. Stripping military resources to promote a wasteful political agenda doesn’t make our military stronger or Americans safer,” Sen. Elizabeth Warren, D-Mass., one of the lawmakers who prepared the report, told The Intercept. “Congress needs to step in and hold the Trump Administration accountable for mishandling billions of taxpayer dollars.”

The report noted that the Pentagon’s requested budget for 2026 indicates that the Defense Department plans to spend at least $5 billion for operations on the southern border alone.

President Donald Trump has made a crackdown on immigration and closed borders the key policy of his second term, and has argued that decreasing immigration and deporting immigrants is a cornerstone of sovereignty and safety. But the lawmakers argued that the level of commitment of Pentagon funds and troops on immigration matters has passed any reasonable standard, hampering the overall readiness of the nation’s armed forces and contributing to wasteful spending in lieu of more efficient allocation of resources by civilian agencies.

“When the military is tasked with immigration enforcement — a role that is not consistent with DoD’s mission, and that servicemembers have neither signed up nor been trained for — those operations often cost several times more than when the same function is performed by civilian authorities,” the lawmakers wrote.

The report found that the Pentagon had allocated at least $1.3 billion for resources and troop deployment to the border; at least $420.9 million for the detention of immigrants at military installations at home and abroad; at least $258 million for the deployment of troops to American cities like Los Angeles, Portland, and Chicago; and at least $40.3 million for military deportation flights.

“As of July 2025, there were roughly 8,500 troops deployed to the southern border, with additional combat units in the process of relieving the troops who were deployed to the border earlier in the year,” the lawmakers wrote. “This deployment has meant making combat-certified units no longer available for their normal functions because they are assisting DHS with immigration enforcement — raising serious concerns about the implications for military readiness.”

The report also singled out the cost of Trump’s deployments to U.S. cities over the past year and cited reporting by The Intercept on the steep cost of those deployments.

The lawmakers also raised concerns that, in addition to the financial costs, the Pentagon’s focus on anti-immigration policies has resulted in military service members “being pulled from their homes, families, and civilian jobs for indefinite periods of time to support legally questionable political stunts.”

They criticized the administration’s failure to adequately inform Congress and the public about the diversion of Pentagon funds. “The Trump administration’s secrecy leaves many questions unanswered,” they wrote. “The administration has failed to provide clarity on basic questions about DoD’s role in supporting DHS.”

The White House responded that “spending allocated money on one mission does not mean other missions become depleted,” and said the use of Pentagon funds on immigration matters should be blamed on political adversaries.

“Operations with the Department of Homeland Security wouldn’t be necessary if Joe Biden didn’t turn the Southern Border into a national security threat, but this administration is proud to fix the problem Democrats started,” said Pentagon press secretary Kingsley Wilson in an emailed statement.

Helldivers 2 - 85% reduction in install size with minimal performance impact

Lobsters
store.steampowered.com
2025-12-11 18:34:59
Comments...
Original Article


Dirk Eddelbuettel: #056: Running r-ci with R-devel

PlanetDebian
dirk.eddelbuettel.com
2025-12-11 18:29:00
Welcome to post 56 in the R4 series. The recent post #54 reviewed a number of earlier posts on r-ci, our small (but very versatile) runner for continuous integration (CI) with R. The post also introduced the notion of using a container in the ‘matrix’ of jobs defined and running in parallel. The in...
Original Article

#056: Running r-ci with R-devel

Welcome to post 56 in the R4 series.

The recent post #54 reviewed a number of earlier posts on r-ci, our small (but very versatile) runner for continuous integration (CI) with R. The post also introduced the notion of using a container in the ‘matrix’ of jobs defined and running in parallel. The initial motivation was the (still ongoing, and still puzzling) variation in run-times of GitHub Actions. So when running CI and relying on r2u for the ‘fast, easy, reliable: pick all three!’ provision of CRAN packages as Ubuntu binaries, a small amount of time is spent prepping a basic Ubuntu instance with the necessary setup. This can be as fast as maybe 20 to 30 seconds, but it can also stretch to almost two minutes when GitHub is busier or out of sorts for other reasons. When the CI job itself is short, that is a nuisance. We presented relying on a pre-made r2u4ci container that adds just a few commands to the standard r2u container to be complete for CI. And with that setup CI runs tend to be reliably faster.

This situation is still evolving. I have not converted any of my existing CI scripts (apart from a test instance or two), but I keep monitoring the situation. However, this also offered another perspective: why not rely on a different container for a different CI aspect? When discussing the CI approach with Jeff the other day (and helping add CI to his mmap repo), it occurred to me we could also use one of the Rocker containers for R-devel. A minimal change to the underlying run.sh script later, this was accomplished. An example is provided as both a test and an illustration in the repo for package RcppInt64 in its script ci.yaml:

    strategy:
      matrix:
        include:
          - { name: container, os: ubuntu-latest, container: rocker/r2u4ci }
          - { name: r-devel,   os: ubuntu-latest, container: rocker/drd }
          - { name: macos,     os: macos-latest }
          - { name: ubuntu,    os: ubuntu-latest }

    runs-on: ${{ matrix.os }}
    container: ${{ matrix.container }}

This runs both a standard Ubuntu setup (fourth entry) and the alternate just described relying on the container (first entry) along with the (usually commented-out) optional macOS setup (third entry). And line two brings the drd container from Rocker . The CI runner script now checks for a possible Rdevel binary as provided inside drd (along with alias RD ) and uses it when present. And that is all that there is: no other change on the user side; tests now run under R-devel. You can see some of the initial runs at the rcppint64 repo actions log. Another example is now also at Jeff’s mmap repo .
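
For illustration only, the selection logic amounts to “use the R-devel binary if the container provides one, else plain R.” The sketch below expresses that idea in Python; the real check lives in r-ci’s run.sh shell script, and only the RD alias mentioned above is taken from the article:

    import shutil

    # A minimal sketch, not the actual run.sh logic: prefer the R-devel
    # binary (the RD alias shipped in rocker/drd) when it is on the PATH,
    # and fall back to the regular R interpreter otherwise.
    r_binary = shutil.which("RD") or shutil.which("R")
    print("running package checks with:", r_binary)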

It should be noted that this relies on R-devel running packages made with R-release. Every few years this breaks when R needs to break its binary API. If and when that happens this option will be costlier as the R-devel instance will then have to (re-)install its R package dependencies. This can be accommodated easily as a step in the yaml file. And under ‘normal’ circumstances it is not needed.

Having easy access to recent builds of R-devel (the container refreshes weekly on a schedule) with the convenience of r2u gives another option for package testing. I may continue to test locally with R-devel as my primary option, and most likely keep my CI small and lean (usually just one R-release run on Ubuntu), but having another option at GitHub Actions is also a good thing.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub .

/code/r4 | permanent link

Over 10,000 Docker Hub images found leaking credentials, auth keys

Lobsters
www.bleepingcomputer.com
2025-12-11 18:18:44
Comments...
Original Article

Over 10,000 Docker Hub images found leaking credentials, auth keys

More than 10,000 Docker Hub container images expose data that should be protected, including live credentials to production systems, CI/CD databases, or LLM model keys.

The secrets impact a little over 100 organizations, among them a Fortune 500 company and a major national bank.

Docker Hub is the largest container registry where developers upload, host, share, and distribute ready-to-use Docker images that contain everything necessary to run an application.

Developers typically use Docker images to streamline the entire software development and deployment lifecycle. However, as past studies have shown , carelessness in creating these images can result in exposing secrets that remain valid for extended periods.

After scanning container images uploaded to Docker Hub in November, security researchers at threat intelligence company Flare found that 10,456 of them exposed one or more keys.

The most frequent secrets were access tokens for various AI models (OpenAI, HuggingFace, Anthropic, Gemini, Groq). In total, the researchers found 4,000 such keys.

When examining the scanned images, the researchers discovered that 42% of them exposed at least five sensitive values.

"These multi-secret exposures represent critical risks, as they often provide full access to cloud environments, Git repositories, CI/CD systems, payment integrations, and other core infrastructure components," Flare notes in a report today.

Size of secret exposure (Source: Flare)

Analyzing 205 namespaces enabled the researchers to identify a total of 101 companies, mostly small and medium-sized businesses, with a few large enterprises being present in the dataset.

Based on the analysis, most of the organizations with exposed secrets are in the software development sector, followed by entities in the market and industrial, and AI and intelligent systems sectors.

More than 10 finance and banking companies had their sensitive data exposed.

Types of firms that exposed secrets on Docker Hub in November (Source: Flare)

According to the researchers, one of the most frequent errors observed was the inclusion of .ENV files, which developers use to store database credentials, cloud access keys, tokens, and various authentication data for a project.

Additionally, they found API tokens for AI services hardcoded in Python application files, config.json files, and YAML configs, as well as GitHub tokens and credentials for multiple internal environments.

Some of the sensitive data was present in the manifest of Docker images, a file that provides details about the image.

Many of the leaks appear to originate from so-called 'shadow IT' accounts, which are Docker Hub accounts that fall outside of stricter corporate monitoring mechanisms, such as those for personal use or belonging to contractors.

Flare notes that roughly 25% of developers who accidentally exposed secrets on Docker Hub realized the mistake and removed the leaked secret from the container or manifest file within 48 hours.

However, in 75% of these cases, the leaked key was not revoked, meaning that anyone who stole it during the exposure period could still use it later to mount attacks.

Exposed secrets exploitation diagram (Source: Flare)

Flare suggests that developers avoid storing secrets in container images, stop using static, long-lived credentials, and centralize their secrets management using a dedicated vault or secrets manager.

Organizations should implement active scanning across the entire software development life cycle and revoke exposed secrets and invalidate old sessions immediately.
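
To make the "active scanning" advice a bit more concrete, here is a minimal sketch of the kind of check that could run in a pipeline before an image is pushed. This is not Flare's tooling: the token prefixes (AKIA for AWS access key IDs, ghp_ for GitHub personal access tokens, sk- for OpenAI-style keys) reflect publicly documented formats, and the ./rootfs path stands in for an unpacked image filesystem.

    import os
    import re

    # Hypothetical rule set; real scanners use far larger pattern lists plus
    # entropy checks and verification against the issuing service.
    PATTERNS = {
        "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
        "openai_style_key": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    }

    # File types the report calls out: .env files, Python sources, JSON/YAML configs.
    def looks_interesting(name):
        return (name.startswith(".env")
                or name.endswith((".py", ".json", ".yml", ".yaml")))

    def scan_tree(root):
        """Yield (path, rule, match) for anything that looks like a live secret."""
        for dirpath, _dirs, filenames in os.walk(root):
            for name in filenames:
                if not looks_interesting(name):
                    continue
                path = os.path.join(dirpath, name)
                try:
                    text = open(path, errors="ignore").read()
                except OSError:
                    continue
                for rule, pattern in PATTERNS.items():
                    for match in pattern.findall(text):
                        yield path, rule, match

    if __name__ == "__main__":
        # Point this at an unpacked image filesystem (e.g. extracted into ./rootfs).
        for path, rule, match in scan_tree("./rootfs"):
            print(f"{path}: {rule}: {match[:12]}...")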


Rivian Unveils Custom Silicon, R2 Lidar Roadmap, and Universal Hands Free

Hacker News
riviantrackr.com
2025-12-11 18:17:19
Comments...
Original Article

RJ opened the first ever Autonomy and AI Day explaining why Rivian believes it is positioned to lead in this next phase of the industry. The company is leaning hard into compute, custom hardware, large scale AI systems, and a shared data foundation that touches every part of the ownership experience.

Let’s break it all down.

Meet the Rivian Autonomy Processor

One of the biggest announcements was RAP1, Rivian’s first in-house processor built on a 5nm multi-chip module. It delivers 1600 sparse INT8 TOPS and can push 5 billion pixels per second inside the new Gen 3 Autonomy Computer. Rivian even built its own AI compiler and platform software to support it. This shows Rivian is no longer just integrating off-the-shelf chips; it is now designing silicon specifically for its autonomy roadmap.

Rivian Autonomy Processor

Autonomy Computer and LiDAR on R2

The ACM3 (Autonomy Compute Module 3) autonomy computer will debut on R2 starting at the end of 2026, but Rivian made it clear that R2 will launch initially without LiDAR. What Rivian confirmed today is that LiDAR will be added later in the program. This lines up with what we explored back in May when we spotted early signs that Rivian was evaluating LiDAR as a redundancy and ground truth layer for future autonomy. Rivian has now officially validated that LiDAR is coming to R2 down the road, where it will join cameras and radar to create a richer, more resilient perception stack.

Rivian R2 with LiDAR

Large Driving Model and Rivian’s Data Loop

Rivian explained how its autonomy stack is powered by a self-improving data loop feeding the company’s Large Driving Model, which is trained similarly to an LLM. Reinforcement learning distills high-quality driving behavior into efficient onboard models. Every release improves the system, and Rivian laid out a trajectory that moves toward point-to-point, eyes-off driving and eventually personal Level 4.

Universal Hands Free Coming to Gen 2

Rivian confirmed that a major software update will bring Universal Hands Free to Gen 2 R1T and R1S. This hands free experience will cover over 3.5 million miles of roads across the US and Canada as long as there are clearly painted lane lines. It is a huge expansion of the assisted driving envelope for current owners.

Autonomy+ Sub Launching in 2026

Rivian also announced Autonomy+, an autonomy tier with continuously expanding features, launching early 2026.

Pricing is $2,500 one time or $49.99 per month.

Rivian Autonomy

Rivian Unified Intelligence

Rivian is reorganizing its entire platform around Rivian Unified Intelligence, a data foundation that ties together telemetry, cloud models, service systems and customer facing features. It is the backbone for predictive maintenance, smarter diagnostics and upcoming AI driven tools.

Rivian Assistant Coming in 2026

Rivian also officially unveiled its new Rivian Assistant, a next generation voice experience arriving early 2026 on Gen 1 and Gen 2 R1 vehicles. The assistant uses a blend of edge models and in vehicle intelligence to understand your schedule, recognize context, and handle everyday requests.

On R2, it will even run fully offline thanks to a more powerful infotainment computer, reducing latency and keeping more of the experience on device.


AI Powered Service and Diagnostics

Rivian is embedding AI into the service workflow. Technicians will have access to an AI driven expert system that analyzes telemetry and vehicle history to pinpoint issues faster and more accurately. These same tools will eventually power the mobile app as well, making self service diagnostics significantly smarter.

Rivian Service Center

GPT-5.2

Hacker News
openai.com
2025-12-11 18:12:43
Comments...

‘Architects of AI’ Wins Time Person of the Year, Sends Gambling Markets Into a Meltdown

404 Media
www.404media.co
2025-12-11 18:10:47
People who bet on "AI" did not win and are incredibly mad....
Original Article

The degenerate gamblers of Polymarket and Kalshi who bet that “AI” would win the Time Person of the Year are upset because the magazine has named the “Architects of AI” the person of the year. The people who make AI tools and AI infrastructure are, notably, not “AI” themselves, and thus both Kalshi and Polymarket have decided that people who bet “AI” do not win the bet. On Polymarket alone, people spent more than $6 million betting on AI gracing the cover of Time.

As writer Parker Molloy pointed out, people who bet on AI are pissed. “ITS THE ARCHITECTS OF AI THISNIS [sic] LITERALLY THE BET FUCK KALSHI,” one Kalshi bettor said.

“This pretty clearly should’ve resolved to yes. If you bought AI, reach out to Kalshi support because ‘AI’ is literally on the cover and in the title ‘Architects of AI.’ They’re not going to change anything unless they hear from people,” said another.

“ThE aRcHiTeCtS oF AI fuck you pay me,” said a third.

“Another misleading bet by Kalshi,” said another gambler. “Polymarket had fair rules and Kalshi did not. They need to fix this.”

But bag holders on Polymarket are also pissed. “This is a scam. It should be resolved to a cancellation and a full refund to everyone,” said a gambler who’d put money down on Jensen Huang and lost. Notably, on Kalshi, anyone who bet on any of the “Architects of AI” won the bet (meaning Sam Altman, Elon Musk, Jensen Huang, Dario Amodei, Mark Zuckerberg, Lisa Su, and Demis Hassabis), while anyone who bet on their products (“ChatGPT” and “OpenAI”) did not. On Polymarket, the rules were even stricter: people who bet “Jensen Huang” lost but people who bet “Other” won.

“FUCK YOU FUCKING FUCK Shayne Coplan [CEO of Polymarket],” said someone who lost about $50 betting on AI to make the cover.

Polymarket made its reasoning clear in a note of “additional context” on the market.

“This market is about the person/thing named as TIME's Person of the Year for 2025, not what is depicted on the cover. Per the rules, “If the Person of the Year is ‘Donald Trump and the MAGA movement,’ this would qualify to resolve this market to ‘Trump.’ However if the Person of the Year is ‘The MAGA movement,’ this would not qualify to resolve this market to ‘Trump’ regardless of whether Trump is depicted on the cover,” it said.

“Accordingly, a Time cover which lists ‘Architects of AI’ as the person of the year will not qualify for ‘AI’ even if the letters ‘AI’ are depicted on the cover, as AI itself is not specifically named.”

It should be noted how incredibly stupid all of this is, which is perhaps appropriate for the year 2025, in which most of the economy consists of reckless gambling on AI. People spent more than $55 million betting on the Time Person of the Year on Polymarket, and more than $19 million betting on the Time Person of the Year on Kalshi. It also presents one of the many downsides of spending money to bet on random things that happen in the world. One of the most common and dumbest things that people continue to do to this day despite much urging otherwise is anthropomorphize AI, which is distinctly not a person and is not sentient.

Time almost always actually picks a “person” for its Person of the Year cover, but it does sometimes get conceptual with it, at times selecting groups of people (“The Silence Breakers” of the #MeToo movement, the “Whistleblowers,” the “Good Samaritans,” “You,” and the “Ebola Fighters,” for example). In 1982 it selected “The Computer” as its “Machine of the Year,” and in 1988 it selected “The Endangered Earth” as “Planet of the Year.”

Polymarket’s users have been upset several times over the resolution of bets in the past few weeks and their concerns highlight how easy it is to manipulate the system. In November, an unauthorized edit of a live map of the Ukraine War allowed gamblers to cash in on a battle that hadn’t happened. Earlier this month, a trader made $1 million in 24 hours betting on the results of Google’s 2025 Year In Search Rankings and other users accused him of having inside knowledge of the process. Over the summer, Polymarket fought a war over whether or not President Zelenskyy had worn a suit . Surely all of this will continue to go well and be totally normal moving forward, especially as these prediction markets begin to integrate themselves with places such as CNN .

About the author

Matthew Gault is a writer covering weird tech, nuclear war, and video games. He’s worked for Reuters, Motherboard, and the New York Times.

Matthew Gault

About the author

Jason is a cofounder of 404 Media. He was previously the editor-in-chief of Motherboard. He loves the Freedom of Information Act and surfing.

Jason Koebler

How Many Members Does Antifa Have? Where Is Its Headquarters? The FBI Has No Answers.

Intercept
theintercept.com
2025-12-11 18:09:46
Despite saying that antifa is the biggest U.S. domestic threat, the FBI couldn’t explain how the movement is a “terror organization” — or an organization at all. The post How Many Members Does Antifa Have? Where Is Its Headquarters? The FBI Has No Answers. appeared first on The Intercept....
Original Article

A top FBI official toed the White House line about antifa as a major domestic terror threat at a House hearing on Thursday — but he struggled to answer questions about the leaderless movement.

Pressed repeatedly by a top Democrat on the House Homeland Security Committee about antifa’s size and location, the operations director of the FBI’s national security division didn’t have answers.

At one point, the FBI’s Michael Glasheen fumbled with his hands as he tried to find an answer for the question from Rep. Bennie Thompson, D-Miss.

“Well, the investigations are active,” Glasheen said.


Glasheen’s comments came three months after President Donald Trump proclaimed that antifa is a “major terror organization,” even though the broad political movement does not have a hierarchy or leadership.

Trump followed his designation with a presidential memo on September 25 directing the FBI-led Joint Terrorism Task Forces to investigate and prosecute antifascists and other adherents of “anti-Americanism.”

The formless nature of the antifascist movement, however, appears to have flummoxed the FBI as it attempts to carry out Trump’s orders.

Glasheen called antifa “our primary concern right now” and called it “the most immediate, violent threat” from domestic terrorists. That led Thompson to ask him where antifa is located and how many members it has.

“We are building out the infrastructure right now,” Glasheen said.

“So what does that mean?” Thompson shot back. “I’m just — we’re trying to get the information. You said antifa is a terrorist organization. Tell us, as a committee, how did you come to that? Where do they exist? How many members do they have in the United States as of right now?”

Glasheen visibly struggled to answer the question before saying that the FBI’s investigations were “active.”

“Well, that’s very fluid. It’s ongoing for us to understand that. The same, no different than Al Qaeda or ISIS,” he said at another point.


Glasheen is a veteran FBI official who was appointed to serve as the Terrorist Screening Center director under the Biden administration in 2023 and selected by current FBI Director Kash Patel as one of the agency’s five operations directors earlier this year.

The FBI’s shift to focusing on alleged left-wing violence comes despite researchers at the Center for Strategic and International Studies finding that, even with an increase this year, such violence remains “much lower than historical levels of violence carried out by right-wing and jihadist attackers.”

Trump has long obsessed over the “threat” that antifa poses to the U.S. His fixation appears to have been supercharged by the September 10 slaying of right-wing activist Charlie Kirk in Utah, allegedly by a shooter who engraved one unused bullet with the words “Hey fascist! catch!”

That helped spur Trump administration officials to launch an extensive search for links between the alleged killer, Tyler Robinson , and domestic or foreign groups that so far has produced no arrests.

Programmers and software developers lost the plot on naming their tools

Lobsters
larr.net
2025-12-11 18:05:55
Comments...
Original Article

This section was labeled under, or is related to Programming

In Dec 2022 I watched Richard Stallman’s talk at EmacsConf, titled “What I’d like to see in Emacs”. One of the interesting points Mr. Stallman made in this talk was about “memorable names”: “I think every package that you […] should have a name that helps you remember what job it does. […] We’ve had a tendency to give packages names for the sake of pure wordplay or lack of obvious meaning”. That Stallman felt compelled to make this point in 2022 tells you everything about how far we’ve fallen, even within the Emacs ecosystem (known for its descriptive naming conventions, dired for directory editor, eshell for Emacs shell).

There’s an odd tendency in modern software development; we’ve collectively decided that naming things after random nouns, mythological creatures, or random favorite fictional characters is somehow acceptable professional practice. This would be career suicide in virtually any other technical field.

I remembered Stallman’s comment lately when I had some difficulty following a friend who was describing a situation in her infrastructure. She said something like this: “We’re using Viper for configuration management, which feeds into Cobra for the CLI, and then Melody handles our WebSocket connections, Casbin manages permissions, all through Asynq for our job queue.” Perhaps only the last package in that sentence says anything about what it actually does. I spent a couple of moments trying to make sense of the names she mentioned, googled some of them, and while doing that I realized that I never have to do this when interacting with other fields: the Golden Gate Bridge tells you it spans the Golden Gate strait. The Hoover Dam is a dam, named after the president who commissioned it, not “Project Thunderfall” or “AquaHold.” Steel I-beams are called I-beams because they’re shaped like the letter I. Even when engineers get creative, there’s logic: a butterfly valve actually looks like butterfly wings. You can tell how the name relates to what it defines, and why it is memorable. If you wrote 100 CLIs, you would never encounter a cobra.

The same thing applies to other fields like chemical engineering, where practitioners maintain even stricter discipline. IUPAC nomenclature ensures that 2,2,4-trimethylpentane describes exactly one molecule. No chemist wakes up and decides to call it “Steve” because Steve is a funny name and they think it’ll make their paper more approachable.

It was not always like that (I believe)

I read a lot of software history, and I can’t really say that there was ever an era of fantastic naming (even very experienced engineers made some very silly naming choices), but at least the prevailing current in the 80s tried to make sense: grep (global regular expression print), awk (Aho, Weinberger, Kernighan; the creators’ initials), sed (stream editor), cat (concatenate), diff (difference). Even when abbreviated, these names were either functional descriptions or systematic derivations. Nobody named the copy command “Sparkle” or the move command “Whisper.”

Early programming languages followed similar logic: FORTRAN (Formula Translation), COBOL (Common Business-Oriented Language), BASIC (Beginner’s All-purpose Symbolic Instruction Code), SQL (Structured Query Language); Lisp, I believe, stands for list processing. The pattern was clear: names conveyed purpose or origin. Somewhere around the 2010s, a memetic virus infected software engineering. Perhaps it started innocently: developers tired of boring corporate naming conventions wanted personality in their open-source projects? Maybe. A few quirky names were charming. A database named after an animal? Sure, MongoDB (from “humongous”) at least has an etymological connection to its purpose, even if “Mongo” became the shorthand.

But we didn’t stop there. We kept going. Now we’re drowning in a zoo of meaningless appellations where the connection between name and function has been severed entirely. The pattern probably accelerated with the rise of GitHub and startup culture. Everyone wanted to be the next Google, a meaningless name that became iconic through market dominance. Google could afford to build brand recognition through billions in advertising and becoming a verb. Your MIT-licensed file parser with 45 GitHub stars cannot.

The cognitive tax

Every obscure name is a transaction cost levied on every developer who encounters it. When you see “libsodium,” you must context-switch from problem-solving mode to detective mode: “What does this do? Let me check the README. Ah, it’s a crypto library. Why is it called sodium? Because chemistry? Because NaCl? Clever, I suppose.” Now multiply this by dozens of dependencies in a modern project. Each one demands tribute: a few seconds of mental processing to decode the semantic cipher. Those seconds accumulate into minutes, then into career-spanning mountains of wasted cognitive effort.

Imagine that you have to explain to a new engineer how your codebase is structured and the general architecture of some project, walking through the dependencies you delegate certain tasks to and how they orchestrate together. Actually, let me put my friend’s statement here again instead: “We’re using Viper for configuration management, which feeds into Cobra for the CLI, and then Melody handles our WebSocket connections, Casbin manages permissions, all through Asynq for our job queue”.

Now pause and actually process that sentence. There’s a snake, another snake, music, a mysterious proper noun, and… async-with-a-q? Half your mental RAM is busy pattern-matching these arbitrary tokens to their actual functions instead of focusing on the architectural decisions being discussed. This is the equivalent of a cardiologist saying “we’ll install a Butterfly in your Whisper to improve your Thunderbeat” instead of “we’ll place a stent in your artery to improve your cardiac output.” Compare this to reading a scientific paper in materials science. When you encounter “high-entropy alloys” or “shape-memory polymers,” the name itself conveys information. You can make educated guesses about properties and applications before reading a single word of description.

Some excuses I’ve heard

I was told some of the following when I shared my concerns about naming before; here are my thoughts on some of them:

  • “But memorable names help with marketing!”

    Sure, if you’re building a consumer product. Your HTTP client, CLI utility helper, or whatever library is not a consumer product. The few people who will ever care about it just want to know what it does.

  • “Descriptive names are boring!”

    Yes, and surgical instruments are boring. Boring is fine when clarity is paramount. This isn’t creative writing class.

  • “It’s just for fun!”

    Your fun has externalities. Every person who encounters your “fun” name pays a small tax. Across the industry, these taxes compound into significant waste. (And yeah, I know, this is not our biggest concern nor the biggest problem the industry is facing right now, but sorry, I had to talk about it.)

  • “All the good descriptive names are taken!”

    We could have used namespaces, prefixes, or compound terms like every other engineering discipline has done for centuries. We have the technology. But even if you cannot do that, at least make the name have something to do with the product: add a “DB” suffix, do some wordplay the way “magit” does. If you can’t be clear, you can at least be relatable.

The path forward

Whatever happened wasn’t malicious; it was cultural. As programming shifted from corporate mainframe work to community builders (which was a good thing), the social norms shifted too.

Thus, we need a cultural correction. Not regulation (open source thrives on freedom), but a revival of professional standards through social pressure and education.

Name your library after what it does. Use compound terms. Embrace verbosity if necessary. http-request-validator is infinitely superior to “zephyr” when someone is scanning dependencies at 2 AM debugging a production incident.

If you absolutely must have a cute mascot or reference, fine: make it the project mascot, not the name. PostgreSQL has Slonik the elephant. PostgreSQL is still called PostgreSQL. The elephant doesn’t replace semantic meaning.

Reserve the creative names for end-user products where branding matters. For infrastructure, tools, and libraries, choose clarity. Every time.

The next time you’re about to name your project after your favorite anime character, pause. Ask yourself: “Would a civil engineer name a bridge support system this way?” If the answer is no, choose a better name.

Our field deserves better than a zoo of random nouns masquerading as professional nomenclature. Clarity isn’t boring, it’s respect for your users’ time and cognitive resources.


Some works I recommend engaging with:

I seek refuge in God, from Satan the rejected. Generated by: Emacs 30.2 ( Org mode 9.7.34). Written by: Salih Muhammed, by the date of: 2025-12-11 Thu 18:27. Last build date: 2025-12-11 Thu 19:37.

GPT-5.2

Hacker News
platform.openai.com
2025-12-11 18:04:47
Comments...

Litestream VFS

Hacker News
fly.io
2025-12-11 17:59:10
Comments...
Original Article
Image by Annie Ruygt

I’m Ben Johnson, and I work on Litestream at Fly.io. Litestream is the missing backup/restore system for SQLite. It’s free, open-source software that should run anywhere, and you can read more about it here .

Again with the sandwiches: assume we’ve got a SQLite database of sandwich ratings, and we’ve backed it up with Litestream to an S3 bucket.

Now, on our local host, load up AWS credentials and an S3 path into our environment. Open SQLite and:

$ sqlite3
SQLite version 3.50.4 2025-07-30 19:33:53
sqlite> .load litestream.so
sqlite> .open file:///my.db?vfs=litestream

SQLite is now working from that remote database, defined by the Litestream backup files in the S3 path we configured. We can query it:

sqlite> SELECT * FROM sandwich_ratings ORDER BY RANDOM() LIMIT 3 ; 
22|Veggie Delight|New York|4
30|Meatball|Los Angeles|5
168|Chicken Shawarma Wrap|Detroit|5

This is Litestream VFS. It runs SQLite hot off an object storage URL. As long as you can load the shared library our tree builds for you, it’ll work in your application the same way it does in the SQLite shell.

Fun fact: we didn’t have to download the whole database to run this query. More about this in a bit.

Meanwhile, somewhere in prod, someone has it in for meatball subs and wants to knock them out of the bracket – oh, fuck:

sqlite> UPDATE sandwich_ratings SET stars = 1 ;

They forgot the WHERE clause!

sqlite> SELECT * FROM sandwich_ratings ORDER BY RANDOM() LIMIT 3 ; 
97|French Dip|Los Angeles|1
140|Bánh Mì|San Francisco|1
62|Italian Beef|Chicago|1

Italian Beefs and Bánh Mìs, all at 1 star. Disaster!

But wait, back on our dev machine:

sqlite> PRAGMA litestream_time = '5 minutes ago'; 
sqlite> select * from sandwich_ratings ORDER BY RANDOM() LIMIT 3 ; 
30|Meatball|Los Angeles|5
33|Ham & Swiss|Los Angeles|2
163|Chicken Shawarma Wrap|Detroit|5

We’re now querying that database from a specific point in time in our backups. We can do arbitrary relative timestamps, or absolute ones, like 2000-01-01T00:00:00Z .

What we’re doing here is instantaneous point-in-time recovery (PITR), expressed simply in SQL and SQLite pragmas.

Ever wanted to do a quick query against a prod dataset, but didn’t want to shell into a prod server and fumble with the sqlite3 terminal command like a hacker in an 80s movie? Or needed to do a quick sanity check against yesterday’s data, but without doing a full database restore? Litestream VFS makes that easy. I’m so psyched about how it turned out.

How It Works

Litestream v0.5 integrates LTX , our SQLite data-shipping file format. Where earlier Litestream blindly shipped whole raw SQLite pages to and from object storage, LTX ships ordered sets of pages. We built LTX for LiteFS , which uses a FUSE filesystem to do transaction-aware replication for unmodified applications, but we’ve spent this year figuring out ways to use LTX in Litestream, without all that FUSE drama.

The big thing LTX gives us is “compaction”. When we restore a database from object storage, we want the most recent versions of each changed database page. What we don’t want are all the intermediate versions of those pages that occurred prior to the most recent change.

Imagine, at the time we’re restoring, we’re going to need pages 1, 2, 3, 4, and 5. Depending on the order in which pages were written, the backup data set might look something like 1 2 3 5 3 5 4 5 5 . What we want is the rightmost 5, 4, 3, 2, and 1, without wasting time on the four “extra” page 5’s and the one “extra” page 3. Those “extra” pages are super common in SQLite data sets; for instance, every busy table with an autoincrementing primary key will have them.

LTX lets us skip the redundant pages, and the algorithm is trivial: reading backwards from the end of the sequence, skipping any page you already read. This drastically accelerates restores.
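
Here is that skip rule as a few lines of Python. Litestream itself is written in Go; this is just a sketch of the algorithm, using the example sequence from the paragraph above.

    def newest_pages(page_writes):
        """Walk the backup sequence backwards, keeping only the most recent
        version of each page and skipping every older, superseded write."""
        seen = set()
        newest = []
        for page in reversed(page_writes):
            if page in seen:
                continue            # an older version of a page we already have
            seen.add(page)
            newest.append(page)
        return newest               # newest-first

    # The example from the text: 1 2 3 5 3 5 4 5 5
    assert newest_pages([1, 2, 3, 5, 3, 5, 4, 5, 5]) == [5, 4, 3, 2, 1]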

But LTX compaction isn’t limited to whole databases. We can also LTX-compact sets of LTX files. That’s the key to how PITR restores with Litestream now work.

In the diagram below, we’re taking daily full snapshots. Below those snapshots are “levels” of changesets: groups of database pages from smaller and smaller windows of time. By default, Litestream uses time intervals of 1 hour at the highest level, down to 30 seconds at level 1. L0 is a special level where files are uploaded every second, but are only retained until being compacted to L1.

Now, let’s do a PITR restore. Start from the most proximal snapshot. Then determine the minimal set of LTX files from each level to reach the time you are restoring to.
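
Sketched in Python, and under the simplifying assumption that each snapshot and LTX file can be described by the time window it covers (the shapes below are illustrations, not Litestream's actual metadata), the plan selection might look something like this:

    def restore_plan(snapshots, levels, target):
        """Pick the most recent snapshot at or before `target`, then walk the
        levels from coarsest to finest, taking each LTX file whose window
        starts at or after the current position and ends at or before `target`.

        snapshots: sorted list of (timestamp, key)
        levels:    list of levels, coarsest first; each level is a sorted
                   list of (start_time, end_time, key)
        All of these shapes are assumptions for illustration.
        """
        snap_ts, snap_key = max((s for s in snapshots if s[0] <= target),
                                key=lambda s: s[0])
        plan, position = [snap_key], snap_ts
        for level in levels:
            for start, end, key in level:
                if start >= position and end <= target:
                    plan.append(key)
                    position = end
        return plan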

We have another trick up our sleeve.

LTX trailers include a small index tracking the offset of each page in the file. By fetching only these index trailers from the LTX files we’re working with (each occupies about 1% of its LTX file), we can build a lookup table of every page in the database. Since modern object storage providers all let us fetch slices of files, we can perform individual page reads against S3 directly.

Anatomy of an LTX file

How It’s Implemented

SQLite has a plugin interface for things like this: the “VFS” interface. VFS plugins abstract away the bottom-most layer of SQLite, the interface to the OS. If you’re using SQLite now, you’re already using some VFS module, one SQLite happens to ship with.

For Litestream users, there’s a catch. From the jump, we’ve designed Litestream to run alongside unmodified SQLite applications. Part of what makes Litestream so popular is that your apps don’t even need to know it exists. It’s “just” a Unix program.

That Litestream Unix program still does PITR restores, without any magic. But to do fast PITR-style queries straight off S3, we need more. To make those queries work, you have to load and register Litestream’s VFS module.

But that’s all that changes.

In particular: Litestream VFS doesn’t replace the SQLite library you’re already using. It’s not a new “version” of SQLite. It’s just a plugin for the SQLite you’re already using.

Still, we know that’s not going to work for everybody, and even though we’re really psyched about these PITR features, we’re not taking our eyes off the ball on the rest of Litestream. You don’t have to use our VFS library to use Litestream, or to get the other benefits of the new LTX code.

The way a VFS library works, we’re given just a couple structures, each with a bunch of methods defined on them. We override only the few methods we care about. Litestream VFS handles only the read side of SQLite. Litestream itself, running as a normal Unix program, still handles the “write” side. So our VFS subclasses just enough to find LTX backups and issue queries.

With our VFS loaded, whenever SQLite needs to read a page into memory, it issues a Read() call through our library. The read call includes the byte offset at which SQLite expected to find the page. But with Litestream VFS, that byte offset is an illusion.

Instead, we use our knowledge of the page size along with the requested page number to do a lookup on the page index we’ve built. From it, we get the remote filename, the “real” byte offset into that file, and the size of the page. That’s enough for us to use the S3 API’s Range header handling to download exactly the block we want.

To save lots of S3 calls, Litestream VFS implements an LRU cache. Most databases have a small set of “hot” pages — inner branch pages or the leftmost leaf pages for tables with an auto-incrementing ID field. So only a small percentage of the database is updated and queried regularly.
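
Put together, the read path is small enough to sketch with boto3. The bucket name, the shape of the page index (page number mapped to an LTX object key, byte offset, and size), and the entry-count bound on the cache are all assumptions for illustration, not Litestream's actual internals:

    from functools import lru_cache
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-litestream-backups"        # assumed bucket name

    # Assumed index shape, built from the LTX trailers described above:
    # page number -> (LTX object key, byte offset within that file, page size)
    page_index = {
        1: ("db/ltx/0000000000000001.ltx", 4096, 4096),
        2: ("db/ltx/0000000000000007.ltx", 8192, 4096),
    }

    @lru_cache(maxsize=1024)                # keep up to 1024 hot pages in memory
    def read_page(pgno):
        """Fetch exactly one database page with an S3 ranged GET."""
        key, offset, size = page_index[pgno]
        resp = s3.get_object(
            Bucket=BUCKET,
            Key=key,
            Range=f"bytes={offset}-{offset + size - 1}",   # inclusive byte range
        )
        return resp["Body"].read()

A real implementation would presumably bound the cache by bytes rather than entries and invalidate it as the index advances; the point here is only how little machinery a ranged page read needs.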

We’ve got one last trick up our sleeve.

Quickly building an index and restore plan for the current state of a database is cool. But we can do one better.

Because Litestream backs up (into the L0 layer) once per second, the VFS code can simply poll the S3 path, and then incrementally update its index. The result is a near-realtime replica. Better still, you don’t need to stream the whole database back to your machine before you use it.
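
A rough sketch of that follower loop, again with boto3; parse_trailer_index is a hypothetical callback standing in for whatever the VFS does to turn an LTX trailer into index entries, and the .ltx key suffix is an assumption:

    import time
    import boto3

    s3 = boto3.client("s3")

    def ltx_keys(bucket, prefix):
        """List LTX object keys under a prefix, in key order."""
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
            for obj in page.get("Contents", []):
                if obj["Key"].endswith(".ltx"):      # assumed suffix
                    yield obj["Key"]

    def follow(bucket, prefix, page_index, parse_trailer_index, interval=1.0):
        """Poll for new LTX files and fold each new trailer index into
        page_index, so reads always see the latest version of every page."""
        seen = set()
        while True:
            for key in ltx_keys(bucket, prefix):
                if key in seen:
                    continue
                # Later files win, assuming keys sort chronologically.
                page_index.update(parse_trailer_index(bucket, key))
                seen.add(key)
            time.sleep(interval)                     # L0 files land about once a second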

Eat Your Heart Out, Marty McFly

Litestream holds backup files for every state your database has been in, with single-second resolution, for as long as you want it to. Forgot the WHERE clause on a DELETE statement? Updating your database state to where it was an hour (or day, or week) ago is just a matter of adjusting the LTX indices Litestream manages.

All this smoke-and-mirrors of querying databases without fully fetching them has another benefit: it starts up really fast! We’re living an age of increasingly ephemeral servers, what with the AIs and the agents and the clouds and the hoyvin-glavins. Wherever you find yourself, if your database is backed up to object storage with Litestream, you’re always in a place where you can quickly issue a query.

As always, one of the big things we think we’re doing right with Litestream is: we’re finding ways to get as much whiz-bang value as we can (instant PITR reading live off object storage: pretty nifty!) while keeping the underlying mechanism simple enough that you can fit your head around it.

Litestream is solid for serious production use (we rely on it for important chunks of our own Fly.io APIs). But you could write Litestream yourself, just from the basic ideas in these blog posts. We think that’s a point in its favor. We land there because the heavy lifting in Litestream is being done by SQLite itself, which is how it should be.

The story of Propolice

Lobsters
miod.online.fr
2025-12-11 17:58:36
Comments...
Original Article

The story of Propolice

As you may remember, during a good 15 years, every OpenBSD release came with one or sometimes a few songs or musical pieces. If you have been enjoying these songs, you might remember a line, near the beginning of the 5.7 song, ``Source fish'', which says: ``Got the Propolice in the GCC''.

Come to think of it, while old-timers will immediately recognize what the lyrics are referring to, the name ``Propolice'' has slowly fallen into oblivion, and I wouldn't be surprised if many people, nowadays, do not have a clue about this.

Allow me to fill the gaps.

Behold, the story of Propolice.


Our story starts in 1998. The dotcom Internet bubble is at its early stages. At that year's Usenix technical conference, a team of fresh graduates from the Oregon Graduate Institute of Science and Technology, led by Crispin Cowan, present StackGuard, a compiler patch (against gcc 2.7.2.2, back then), which is designed to mitigate control flow attacks causing the return address on the stack to be overwritten.

This hardened code generation is used to build a variant of the already popular Red Hat Linux distribution, called Immunix .

This presentation unfortunately mostly falls on deaf ears, Immunix remaining a niche distribution. These deaf ears include the gcc developers, even though the StackGuard changes soon get ported to the (much more reactive) egcs project, which would eventually take over gcc development as of gcc 2.95 in 1999.

One of the reasons for the lack of interest may be the non-portability of the gcc patch. Quoting from the Usenix paper: ``The changes are architecture-specific (in our case, i386), but since the total changes are under 100 lines of gcc, portability is not a major concern.''


Enter Hiroaki Etoh. Working in the IBM Technical Research Labs in Japan, he decides to write his own version of the StackGuard patch, but in a portable way. In addition to introducing a similar checked value on the stack, his diff also reorders local variables of functions to put the non-array variables at lower stack addresses, to also prevent them from being overwritten by a stack buffer overflow.

The entirety of the Propolice code lies in the machine-independent parts of the compiler, unlike StackGuard. It operates on the compiler's internal representation of the code (RTL), and as such is expected to work on all hardware platforms supported by gcc. However, being placed in a more generic code area, and with the extra logic to reorder local variables, the patch is much larger than StackGuard; the core of the changes is contained in a new protector.c file, which is over 2,000 lines long (including comments).

A paper describing this work is published in june 2000 by Hiroaki Etoh and Kunikazu Yoda.

A few months later, Hiroaki Etoh sends his diff to the gcc-patches mailing list, where (surprise!) it falls on deaf ears, maybe because it is based on egcs 1.1.2, which is a bit old. His mail mentions the Propolice web page as well.

The next month, he sends an updated diff, first targeting the latest public gcc snapshot, then targeting the up-to-date gcc, version 2.95. The feedback is mostly negative, gcc developers preferring the work on compile-time bounds checking, and the patch, much larger than StackGuard, is considered too intrusive to be worth considering.

Nevertheless, Hiroaki Etoh does not give up and keeps maintaining his patch, porting it to gcc 3.0 in june 2001, and also making it work on 32-bit sparc systems running Solaris, to prove the portability of his approach. The Propolice web page is updated to reflect this.

The next year, on april 23rd, 2002, famous security researcher Ivan Arce from CORE-SDI in Argentina publishes a few ways to bypass the stack smashing protection. All existing technologies are affected... but Propolice.


Later that year, in june, the OpenBSD project releases a few security errata for OpenBSD 3.1 (and 3.0), and, disgruntled by these, starts switching its mindset from ``our work is to make the code bug-free'' to ``in addition to making the code bug-free, we should make exploitation as difficult as possible''. One of these first changes is to make the stack no longer executable on platforms where this can be done thanks to MMUs managing separate read and execute permissions. While thinking about stacks, one of the OpenBSD developers, Federico G. Schwindt, remembers StackGuard and discovers Hiroaki Etoh's work.

After giving it a successful run on an OpenBSD/i386 system, he sends a mail to Theo de Raadt, the OpenBSD project leader, to start a discussion about this. Theo tells him to involve more people, and that's how a mail with the subject ``propolice diffs for gcc'' ends up in my mailbox (and those of a few other developers) on july 4th.

I become immediately enthusiastic about the promises of Propolice; but if we want to make use of it in OpenBSD, it needs to work on all our supported platforms, which at that time span 7 processor families: alpha, i386, m68k, PowerPC, sparc (both 32-bit and 64-bit) and vax. In 2002, amd64 (or x86_64 if you prefer this ugly name) does not exist yet outside of AMD labs, arm and mips won't come back to OpenBSD for a few more years, and neither the PA-Risc port (hppa) nor the m88k support is in working condition at that time.


One of the first systems I tried Propolice on was a venerable HP 9000/425t, a 25MHz 68040-based workstation running OpenBSD/hp300 (I did not have faster 68060 systems at this point). Unfortunately, the experiment was cut short quite quickly, when the Propolice-enabled compiler failed to recompile itself: one of the intermediate programs used to produce backend-specific data (genattr) would always hit a segmentation fault and dump core.

Tinkering with the coredump and the `genattr` code, I was able to produce a simple reproducer on july 23rd:

Date: Tue, 23 Jul 2002 09:35:21 +0000
From: Miod Vallat
To: Federico Schwindt, Theo de Raadt
Subject: Propolice failure on m68k

/*
Propolice fails on m68K when you use alloca() in a routine, and then
invoke a subroutine that has enough local variables.

Here is a simple test program that exhibits the problem. It causes a
core dump on m68k at any optimization level (well, I only tested -O0 and
-O2) on m68K with your Propolice diffs applied to gcc. If you remove the
spacefiller variable inside the subroutine, it will work nicely. Same if
you trim the char[] variable down to 7 bytes (final \0 included in
count) or less.

Does it affect other arches too?

If it only affects m68K, I guess it would be easier to look at how
alloca() works on this arch and alter it so that it can coexist with
propolice.

Miod
*/

#include <stdlib.h>

int
depth(i)
        int i;
{
        char spacefiller[] = "1234567"; /* "123456" will pass */

        return (i);
}

int
main()
{
        char *data;

        data = alloca(42);
        return (depth(42));
}

Not being able to make progress from that (my knowledge of gcc internals was simply nonexistent in those days), we decided to get Hiroaki Etoh involved and asked him whether he would be willing to help us. Fortunately, he was, and an account for him was set up on my personal lab on the 24th:

Date: Wed, 24 Jul 2002 07:41:24 +0000
From: Miod Vallat
To: Hiroaki Etoh
Cc: Federico Schwindt
Subject: Account on an m68k machine

Hello,

  I have setup your account on my net. Access by ssh to "gentiane.org"
first, then from the machine you arrive on, "ssh -1 epi" will let you
reach a fast (for an m68k) hp300 machine. Please use ssh -1 there to
save cpu pover...

  The compiler here is OpenBSD's gcc with your propolice changes, You
can also use the regular, non propolice, compiler, should that be
necessary, with "gcc -V2.95.3-NP".

  If you need anything, ask fgs or myself.

Regards,
Miod

(2025 note: one of the proofreaders of this article expressed some surprise at the mention of "ssh -1". In 2002, ssh protocol 1 was still widely in use, although it was strongly advised not to use it due to security flaws, and the default was to use protocol 2. However, protocol 2 was much more expensive, cpu-wise, than protocol 1 because of extra cryptographic work, and I was still using protocol 1 inside my (trusted) home network, to reduce the load on old systems. Fortunately, modern ssh implementations nowadays support Ed25519, which is the best choice for slow machines and cheaper than the RSA computations used by protocol 1)

Later that day, Theo sent a mail to Hiroaki Etoh explaining what our (OpenBSD) plans with regard to Propolice were:

Date: Wed, 24 Jul 2002 02:49:15 -0600
From: Theo de Raadt
To: etoh@jp.ibm.com
Subject: propolice

i really hope we can get this stuff into the tree for all our
architectures.  but it must be for all, before it goes in -- disabled
at first.  then once it works on all, we will enable it for certain
parts.

all this must happen VERY fast for it to make our next release.

it is unlikely we can enable it for the entire tree by next release,
unless you and fgs work very hard at it... miod is a very good person
to help as well for testing --- his ethics in that sense are out of
control.  however, the schedule does kind of suck, time is getting
tight... there is a 2 month window to get it utterly completely
working on all arch's...

that is m68k, sparc, sparc64, alpha, i386, powerpc, vax
[hppa and m88k do not matter yet]

[...]

anyways, talk to you soon.  if you get m68k working, we can get you
access to other architectures.  like vax.  vax will be fun ;-) i have
a vax that is faster than a top of the line sparc 20, so it is not to
be mocked.

It took Hiroaki Etoh some time to get started, as he was not used to the OpenBSD source tree (although he had some experience with FreeBSD) and expected to be able to work in a way similar to Linux distributions, extracting a tarball of the official gcc sources and configuring it for the current system. I had to explain that the OpenBSD source repository contains a copy of the compiler sources, with some patches, not all of them having been accepted (or even submitted) upstream, and that he had to work on this tree in order to build a working OpenBSD compiler.

Once he had understood how to rebuild the compiler, things progressed quite fast. The Propolice patch, back then, did not expect (and did not cope with) the m68k pre-decrement and post-increment addressing modes used on the stack pointer. Because of this, it would end up miscomputing stack pointer-relative offsets, and then generate incorrect code.

Once the Propolice patch was made aware of these addressing modes, things went a bit further, but trying to compile part of the gcc support code (libgcc2.a) would end up with mysterious errors from the assembler:

rm -f tmplibgcc2.a
for name in _muldi3 _divdi3 _moddi3 _udivdi3 _umoddi3 _negdi2  _lshrdi3 _ashldi3 _ashrdi3 _ffsdi2  _udiv_w_sdiv _udivmoddi4 _cmpdi2 _ucmpdi2 _floatdidf _floatdisf  _fixunsdfsi _fixunssfsi _fixunsdfdi _fixdfdi _fixunssfdi _fixsfdi _fixxfdi _fixunsxfdi _floatdixf _fixunsxfsi  _fixtfdi _fixunstfdi _floatditf __gcc_bcmp _varargs __dummy _eprintf  _bb _shtab _clear_cache _trampoline __main _exit  _ctors _pure _guard;  do  echo ${name};  ./xgcc -B/usr/m68k-unknown-openbsd3.1/bin/ -B./ -I/usr/m68k-unknown-openbsd3.1/include -O2   -DIN_GCC    -O2   -O2 -save-temps -DOPENBSD_NATIVE -I./include   -g1 -DIN_LIBGCC2 -D__GCC_FLOAT_NOT_NEEDED  -fpic -I. -I/usr/src/gnu/egcs/gcc -I/usr/src/gnu/egcs/gcc/config -I/usr/src/gnu/egcs/gcc/../include -c -DL${name} -DUSE_COLLECT2 /usr/src/gnu/egcs/gcc/libgcc2.c -o ${name}.o;  if [ $? -eq 0 ] ; then true; else exit 1; fi;  ar rc tmplibgcc2.a ${name}.o;  rm -f ${name}.o; done
_muldi3
_divdi3
_moddi3
_udivdi3
_umoddi3
_negdi2
_lshrdi3
_ashldi3
_ashrdi3
_ffsdi2
_udiv_w_sdiv
_udivmoddi4
_cmpdi2
_ucmpdi2
_floatdidf
_floatdisf
_fixunsdfsi
_fixunssfsi
libgcc2.s: Assembler messages:
libgcc2.s:120: Error: cannot create floating-point number
libgcc2.s:127: Error: cannot create floating-point number
*** Error code 1

I asked Hiroaki Etoh again for help. It turns out that there was an endianness bug in his code deciding how to set up local variables on the stack in some circumstances, and floating-point types larger than the register size ended up laid out in little-endian order, which was obviously not going to work on m68k, a big-endian architecture.

At the end of july, the Propolice-enabled compiler would pass the gcc regression testsuite as badly as the non-Propolice compiler (a large majority of the tests would pass, and those which did fail were already failing without the Propolice changes, so it was not to blame for these failures).

But the road was still not clear, as at the same time Matthieu Herrb started to test on PowerPC and quickly found that the libc built with the Propolice-enabled compiler did not behave correctly either. He was able to narrow this down to a libc file which had to be built with optimization disabled (or with Propolice disabled).

At that time, we also noticed that two OpenBSD developer accounts had been compromised in june, and I spent quite some time on forensics, as well as checking all the ssh connection logs over the previous few months and checking with everyone whether the IPs their connections originated from were legitimate. This temporarily distracted me from testing Propolice or doing any other OpenBSD-related work.

Nevertheless, by the middle of august, Propolice on m68k was considered stable, being able to rebuild itself, and build the complete OpenBSD/hp300 userland.

It was time to try another untested architecture: vax (my plans were to test the slowest platforms first, as any regression there would take time to get fixed and the fix to be validated).

Date: Mon, 2 Sep 2002 08:40:20 +0000
From: Miod Vallat
To: Hiroaki Etoh
Subject: vax

You can now login to "durolle".

/usr/src only contains the gnu/egcs/gcc subdirectory, with unpatched
sources. Feel free to play with it - I'll use another similar vax to
build a complete system with propolice in a few moments.

Miod

While Hiroaki Etoh was confronting his code with the Vax challenge (a CISC processor with many complex addressing modes, and a hardware-controlled stack frame layout) on that mid-range, 55MHz VAXstation 4000/60, it was time to work on tying up the loose ends, making a first candidate patch against the OpenBSD tree, and getting more people involved in testing and/or reviewing.

The plan was still to ship a Propolice-capable compiler in the upcoming 3.2 release, and maybe enable the stack protector on a few critical binaries, such as sshd.

With the release cutoff getting close, on september 27, it was decided that Propolice would not be included in the upcoming release, but in the next one.

[19:03] [=Sign-on=] etoh (etoh@[32.97.110.75]) entered group
[19:05] <deraadt> hi etoh
[19:05] <deraadt> the tree will unlock in about two weeks, and then we can make headway at getting propolice into the tree for real
[19:06] <deraadt> then in 6 months you can see a complete operating system ship with your code activated
[19:06] <deraadt> all our major cpu now work with it?  powerpc i386 sparc sparc64 vax m68k ?
[19:07] <etoh> I don't have an experience on sparc64 only.
[19:07] <deraadt> you mean only sparc64 is unconfirmed?
[19:07] <etoh> yes.
[19:08] <deraadt> that's cool.
[19:08] <etoh> Do you have sparc64 machine in miod lab?
[19:13] <deraadt> i think he's not around.

Three days later, Hiroaki Etoh acknowledged that his patch still needed some changes to run on 64-bit platforms:

[ 1:03] [=Sign-on=] etoh (etoh@[32.97.110.75]) entered group
[ 1:05] <deraadt> hi
[ 1:12] <etoh> hi
[ 1:14] <deraadt> so you need sparc64 access now, eh?
[ 1:15] <etoh> i can access sparc64 machine in miod's lab. i found that propolice does not work well on 64 bit machine.
[ 1:16] <etoh> i understand the problem, so i'll modify some code.
[ 1:17] <deraadt> oh, so alpha has a few issues too?
[ 1:18] <deraadt> sparc64 is big-endian 64 bit.  this creates quite a few different types of bugs than a little-endian 64 bit arch does.
[ 1:23] <etoh> the problem comes from the size of frame pointer register, not endian.
[ 1:24] <etoh> i understood there is no alpha system in openbsd platform, right?
[ 1:30] <jason> ? We run on alphas...
[ 1:30] <t> what?  alpha works.  well, mine doesn't....
[ 1:38] <etoh> all right. so, i have to test alpha too.

After a few changes to Propolice, a first, incomplete, patch integrating Propolice into the OpenBSD tree was shared among developers on october 8th:

Date: Tue, 8 Oct 2002 22:18:55 +0000
From: Miod Vallat
To: private mailing list
Subject: propolice meets the tree, step 1

The following diff puts the current version of propolice into the tree.
For those who missed the story, propolice is a stack attacks protection
built into the compiler, so that, if some flaw in the code results in
stack corruption, the program abort rather than continuing and possibly
running an exploit.

More details at
http://www.trl.ibm.com/projects/security/ssp

This diff adds propolice to our gcc, but does not enable it by default.
To explicitely compile something with propolice, use the
-fstack-protector option. To explicitely compile something without
propolice, use the -fno-stack-protector option.

So far, this version of propolice is known to work correctly on:
- i386
- m68k
- sparc
- vax
and to not work on:
- alpha (being worked on)
- sparc64 (being worked on)
Powerpc did not work last time checked, but numerous problems have been
fixed since and it's worth another try.

Ok, now before it sounds like I did all the work, propolice was first
spotted by fgsch@, who did the necessary OpenBSD modifications to the
libgcc changes. Hiroaki Etoh, the author, did the m68k and vax fixes,
and is working on more. I only behaved as a test loony.

[...]
If you are interested in building complete systems with propolice
enabled, don't do that out of the box. fgs and I have diffs to apply to
various parts of the tree (especially ld.so) to make this work. You have
been warned.

[...]

A second patch, this time complete, was shared on november 22nd:

Date: Fri, 22 Nov 2002 00:35:49 +0000
From: Miod Vallat
To: private mailing list
Cc: Hiroaki Etoh
Subject: Propolice meets the OpenBSD tree, part 2

This new diff brings -again- the latest propolice code into the main
OpenBSD tree. The diffs against the previous diff are:
- propolice is disabled by default. Use cc -fstack-protector to benefit
  of the code. This will eventually be turned on for more and more parts
  of the system.
- the stack smash handler attempts to log information about what
  happened. This won't work on chroot binaries unless /dev/log exists in
  the chroot jail, though, but there's no easy way to solve this.
- a.out and ELF ld.so diffs to be able to compile with propolice, even
  if some code is dummied.
- sys/ hooks to always disable propolice. You can change this to enable
  propolice in the kernel, but this will be over my dead, cold, body.

Tested on almost all arches (means, I have tested different versions of
this diff but it should be ok now).

[...]

I would really like this to go in the tree ASAP, as even with propolice
disabled, this affects gcc behaviour. Then we can enable protection
after some time, the goal being that 3.3 ships with propolice enabled at
least on the most sensitive parts of the system.

[...]

Known problems with stack protection so far:
- emacs port will not link. I'm working on a fix, but don't push me,
  it's emacs we are talking about.
- I have got some mitigated results with perl 5.8, builds fine on some
  arches, not as well on others, I'll need to have a closer look.

Fixed problems include 64 bit support, inline functions, and everything
fgs, naddy and I don't remember.

Send flames to me, technical questions to etoh with me and fgs cc'ed.

Miod

The patch was, however, not final, as we were still tinkering with the __stack_smash_handler routine, which gets invoked upon function exit when the stack canary value has been modified. In the original Propolice diff, that function was added to the gcc runtime, libgcc.a; but since it would use a few functions from libc to report the stack corruption and exit, this put an unexpected dependency on libc, while libgcc is supposed to be the last library on the linker command line, and not depend upon anything.

That extra dependency had required us to add an explicit libc dependency in several Makefiles, for the few statically-linked daemons we intended to compile with stack protection, such as isakmpd(8).

Eventually its location in OpenBSD was reconsidered, and it was moved to libc. The body of the __guard_setup function, which is used to compute the canary value upon startup, was also modified to use sysctl to get randomness, rather than reading from /dev/urandom, which may not exist when running within a chroot'ed environment.
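For readers who have never looked at how the stack protector works, here is a minimal, hand-written sketch of what the compiler-emitted checks and their two runtime helpers boil down to. The names mirror the routines mentioned above (minus the leading underscores), but this is purely illustrative C, not the actual Propolice or OpenBSD implementation: the real guard setup gets its randomness from the kernel rather than from rand(), and the real check is emitted automatically in every protected function's prologue and epilogue.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Illustrative stand-ins for the real helpers (__guard_setup and
   __stack_smash_handler); the real guard is filled from a kernel
   randomness source, not rand(). */
static long guard;

static void guard_setup(void)
{
    srand((unsigned)time(NULL));
    guard = rand();
}

static void stack_smash_handler(const char *func)
{
    /* report the corruption and abort rather than return through a
       possibly overwritten return address */
    fprintf(stderr, "stack overflow in function %s\n", func);
    abort();
}

static void copy_string(const char *src)
{
    long canary = guard;    /* what -fstack-protector places between the
                               local buffers and the saved return address */
    char buf[16];

    strcpy(buf, src);       /* fine for short input; a longer argument would
                               overrun buf and trample the canary before
                               reaching the return address */

    if (canary != guard)    /* the check emitted in the function epilogue */
        stack_smash_handler("copy_string");
}

int main(void)
{
    guard_setup();
    copy_string("hello");
    return 0;
}

With -fstack-protector, gcc inserts the equivalent of the canary load and the epilogue comparison itself, and also reorders local variables so that character buffers sit next to the canary, which is the "smart local variable reordering" mentioned at the end of this article.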

With that settled, a new diff was shared on november 29th, and since no one had any further changes to request, the Propolice patch landed in OpenBSD on december 2nd, enabled by default on all platforms.


Unfortunately (and predictably), it was not long until we found out the hard way that some problems had managed to creep in.

Date: Tue, 3 Dec 2002 07:52:30 +0000
From: Miod Vallat
To: private mailing list
Subject: First propolice in-tree problem...

It turns out there are issues with gnu bc again on alpha. Etoh has been
notified.

[...]

I was quite surprised that I had missed this during all my tests, so I investigated a bit, found why I had missed it, and shared an explanation:

Date: Tue, 3 Dec 2002 21:14:30 +0000
From: Miod Vallat
To: private mailing list
Subject: dc bug story

I found a workaround for the propolice problems in gnu/usr.bin/dc, as
well as the reason why it was not found earlier.

It turns out that make cleandir does not remove the generated fbc helper
utility. Hence I was happily building with an old compiled, thus
working, fbc binary. I only get bitten by the problem when I wiped my
/usr/obj tree.

As for the workaround, I tracked the problem to be in bc.c - only this
file needed to be compiled with -O0 with propolice for fbc to work. It
turns out this is a generated file, from bc.y, via bison.

Unfortunately it uses some gnuisms, so our in-tree flex can not process
it. Using the port of bison, I generated a new bc.c, fixed bc.h to make
the constants match again, and it built like a charm. Very simple tests
show no difference of behaviour.

[...]

Jason Downs was quick to point out that I should process the bc.y file with yacc, not flex (doh!), and that was enough to get a working bc binary.

Further analysis showed that the old generated parser for bc used alloca() while the newly recompiled one didn't. Hiroaki Etoh quickly came up with a simple fix to handle that situation anyway, and no changes to the build machinery were needed after all.

The Propolice web page was updated to mention these recent achievements.

A few more bug fixes were added in the next few months (two or three per month), and then OpenBSD 3.3 was released with all userland binaries, as well as the binary packages for third-party software, compiled with stack protection, on all platforms... but OpenBSD/hppa .

Indeed, in early 2003, the OpenBSD/hppa porting effort made significant progress and that port became a reality. However, one of the (many) odd things about the PA-RISC architecture is that the stack is architected to grow up (towards larger addresses), unlike on all other platforms.

When I mentioned this to Hiroaki Etoh, he was quick to realize there was no way to achieve a working stack protection:

From: Hiroaki Etoh
Date: Fri, 10 Jan 2003 11:21:01 +0900
To: Miod Vallat
Subject: Re: "make build" policy, and a few other notes

[...]
< The main thing to know about hppa, is that the stack grows up, contrary
< to most other platforms, and this is probably where propolice has
< issues.

oh my god.
Do you know that propolice can not protect the following on the processor?

void bar (char *p) {
    gets(p);     /* buffer overflow */
}
int foo () {
   char buf[20];
   bar (buf);
}

In this case, the overflow damages the frame region of "bar", it means the
return address in it can be compromised.
To avoid this situation, we can put a canary on top of the stack and check
the integrity of the canary every time before a function call.  BUT it is
a great overhead.

[...]

The only reasonable choice to make was to keep Propolice disabled on hppa.


After the OpenBSD 3.3 release, other software projects (especially Linux distributions, starting with Gentoo) followed the OpenBSD example and started to add Propolice to their system compiler and enable it on key software.

Occasional problems and fixes continued until shortly before the OpenBSD 3.6 release, with a last bug exposed by the mysql testsuite.

Eventually, as part of the gcc 4 effort to rewrite, in a cleaner way, the code responsible for setting up call frames, a "smoother" reimplementation of the protection, in far fewer lines of code, was created in 2005. This proposal got refined a few weeks later; this time there were no objections to it, and it was merged shortly afterwards. gcc 4.1, released in february 2006, was the first release with the feature available.


After that, Hiroaki Etoh no longer had to work on his patch. The latest patches he published were for gcc 3.3 and 3.4. And then he went on to work on other projects.

Consequently, the Propolice page on IBM's web site received no updates after august 2005, was moved to http://www.research.ibm.com/trl/projects/security/ssp/ in december 2011, before eventually disappearing in july 2014. You can visit the last available version archived by the wayback machine.

After all these years, the name Propolice had disappeared.

But its work is done, and its legacy is there - in this day and age, cheap stack protection and smart local variable reordering are available in all serious compilers, and the computing world is a (slightly) safer place because of it.

[$] Toward a policy for machine-learning tools in kernel development

Linux Weekly News
lwn.net
2025-12-11 17:57:52
The first topic of discussion at the 2025 Maintainers Summit has been in the air for a while: what role — if any — should machine-learning-based tools have in the kernel development process? While there has been a fair amount of controversy around these tools, and concerns remain, it seems that the...
Original Article

The page you have tried to view ( Toward a policy for machine-learning tools in kernel development ) is currently available to LWN subscribers only.


(Alternatively, this item will become freely available on December 25, 2025)

Cosmic Desktop is a fantastic first draft

Lobsters
www.youtube.com
2025-12-11 17:28:05
Comments...

Show HN: SIM – Apache-2.0 n8n alternative

Hacker News
github.com
2025-12-11 17:20:11
Comments...
Original Article


Build and deploy AI agent workflows in minutes.

Sim.ai Discord Twitter Documentation

Build Workflows with Ease

Design agent workflows visually on a canvas—connect agents, tools, and blocks, then run them instantly.

Workflow Builder Demo

Supercharge with Copilot

Leverage Copilot to generate nodes, fix errors, and iterate on flows directly from natural language.

Copilot Demo

Integrate Vector Databases

Upload documents to a vector store and let agents answer questions grounded in your specific content.

Knowledge Uploads and Retrieval Demo

Quickstart

Cloud-hosted: sim.ai

Sim.ai

Self-hosted: NPM Package

http://localhost:3000

Note

Docker must be installed and running on your machine.

Options

Flag Description
-p, --port <port> Port to run Sim on (default 3000 )
--no-pull Skip pulling latest Docker images

Self-hosted: Docker Compose

# Clone the repository
git clone https://github.com/simstudioai/sim.git

# Navigate to the project directory
cd sim

# Start Sim
docker compose -f docker-compose.prod.yml up -d

Access the application at http://localhost:3000/

Using Local Models with Ollama

Run Sim with local AI models using Ollama - no external APIs required:

# Start with GPU support (automatically downloads gemma3:4b model)
docker compose -f docker-compose.ollama.yml --profile setup up -d

# For CPU-only systems:
docker compose -f docker-compose.ollama.yml --profile cpu --profile setup up -d

Wait for the model to download, then visit http://localhost:3000 . Add more models with:

docker compose -f docker-compose.ollama.yml exec ollama ollama pull llama3.1:8b

Using an External Ollama Instance

If you already have Ollama running on your host machine (outside Docker), you need to configure the OLLAMA_URL to use host.docker.internal instead of localhost :

# Docker Desktop (macOS/Windows)
OLLAMA_URL=http://host.docker.internal:11434 docker compose -f docker-compose.prod.yml up -d

# Linux (add extra_hosts or use host IP)
docker compose -f docker-compose.prod.yml up -d  # Then set OLLAMA_URL to your host's IP

Why? When running inside Docker, localhost refers to the container itself, not your host machine. host.docker.internal is a special DNS name that resolves to the host.

For Linux users, you can either:

  • Use your host machine's actual IP address (e.g., http://192.168.1.100:11434 )
  • Add extra_hosts: ["host.docker.internal:host-gateway"] to the simstudio service in your compose file

Using vLLM

Sim also supports vLLM for self-hosted models with OpenAI-compatible API:

# Set these environment variables
VLLM_BASE_URL=http://your-vllm-server:8000
VLLM_API_KEY=your_optional_api_key  # Only if your vLLM instance requires auth

When running with Docker, use host.docker.internal if vLLM is on your host machine (same as Ollama above).

Self-hosted: Dev Containers

  1. Open VS Code with the Remote - Containers extension
  2. Open the project and click "Reopen in Container" when prompted
  3. Run bun run dev:full in the terminal or use the sim-start alias
    • This starts both the main application and the realtime socket server

Self-hosted: Manual Setup

Requirements:

Note: Sim uses vector embeddings for AI features like knowledge bases and semantic search, which requires the pgvector PostgreSQL extension.

  1. Clone and install dependencies:
git clone https://github.com/simstudioai/sim.git
cd sim
bun install
  2. Set up PostgreSQL with pgvector:

You need PostgreSQL with the vector extension for embedding support. Choose one option:

Option A: Using Docker (Recommended)

# Start PostgreSQL with pgvector extension
docker run --name simstudio-db \
  -e POSTGRES_PASSWORD=your_password \
  -e POSTGRES_DB=simstudio \
  -p 5432:5432 -d \
  pgvector/pgvector:pg17

Option B: Manual Installation

  3. Set up environment:
cd apps/sim
cp .env.example .env  # Configure with required variables (DATABASE_URL, BETTER_AUTH_SECRET, BETTER_AUTH_URL)

Update your .env file with the database URL:

DATABASE_URL="postgresql://postgres:your_password@localhost:5432/simstudio"
  4. Set up the database:

First, configure the database package environment:

cd packages/db
cp .env.example .env 

Update your packages/db/.env file with the database URL:

DATABASE_URL="postgresql://postgres:your_password@localhost:5432/simstudio"

Then run the migrations:

bunx drizzle-kit migrate --config=./drizzle.config.ts
  5. Start the development servers:

Recommended approach - run both servers together (from project root):

bun run dev:full

This starts both the main Next.js application and the realtime socket server required for full functionality.

Alternative - run servers separately:

Next.js app (from project root):

Realtime socket server (from apps/sim directory in a separate terminal):

cd apps/sim
bun run dev:sockets

Copilot API Keys

Copilot is a Sim-managed service. To use Copilot on a self-hosted instance:

  • Go to https://sim.ai → Settings → Copilot and generate a Copilot API key
  • Set COPILOT_API_KEY environment variable in your self-hosted apps/sim/.env file to that value

Environment Variables

Key environment variables for self-hosted deployments (see apps/sim/.env.example for full list):

Variable Required Description
DATABASE_URL Yes PostgreSQL connection string with pgvector
BETTER_AUTH_SECRET Yes Auth secret ( openssl rand -hex 32 )
BETTER_AUTH_URL Yes Your app URL (e.g., http://localhost:3000 )
NEXT_PUBLIC_APP_URL Yes Public app URL (same as above)
ENCRYPTION_KEY Yes Encryption key ( openssl rand -hex 32 )
OLLAMA_URL No Ollama server URL (default: http://localhost:11434 )
VLLM_BASE_URL No vLLM server URL for self-hosted models
COPILOT_API_KEY No API key from sim.ai for Copilot features

Troubleshooting

Ollama models not showing in dropdown (Docker)

If you're running Ollama on your host machine and Sim in Docker, change OLLAMA_URL from localhost to host.docker.internal :

OLLAMA_URL=http://host.docker.internal:11434 docker compose -f docker-compose.prod.yml up -d

See Using an External Ollama Instance for details.

Database connection issues

Ensure PostgreSQL has the pgvector extension installed. When using Docker, wait for the database to be healthy before running migrations.

Port conflicts

If ports 3000, 3002, or 5432 are in use, configure alternatives:

# Custom ports
NEXT_PUBLIC_APP_URL=http://localhost:3100 POSTGRES_PORT=5433 docker compose up -d

Tech Stack

Contributing

We welcome contributions! Please see our Contributing Guide for details.

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

Made with ❤️ by the Sim Team

UK fines LastPass over 2022 data breach impacting 1.6 million users

Bleeping Computer
www.bleepingcomputer.com
2025-12-11 17:09:00
The UK Information Commissioner's Office (ICO) fined the LastPass password management firm £1.2 million for failing to implement security measures that allowed an attacker to steal personal information and encrypted password vaults belonging to up to 1.6 million UK users in a 2022 breach. [...]...
Original Article

LastPass

The UK Information Commissioner's Office (ICO) fined the LastPass password management firm £1.2 million for failing to implement security measures that allowed an attacker to steal personal information and encrypted password vaults belonging to up to 1.6 million UK users in a 2022 breach.

According to the ICO, the incident stemmed from two interconnected breaches starting in August 2022.

The first breach occurred in August 2022, when a hacker compromised a LastPass employee's laptop and accessed portions of the company's development environment.

While no personal data was taken during this incident, the attacker was able to obtain the company's source code, proprietary technical information, and encrypted company credentials. LastPass initially believed the breach was contained because the decryption keys for these credentials were stored separately in the vaults of four senior employees.

However, the following day, the attacker targeted one of those senior employees by exploiting a known vulnerability in a third-party streaming application, believed to be Plex , which was installed on the employee's personal device.

This access allowed the hacker to deploy malware, capture the employee's master password using a keylogger, and bypass multi-factor authentication using an already MFA-authenticated cookie.

Because the employee used the same master password for both personal and business vaults, the attacker was able to access the business vault and steal an Amazon Web Services access key and a decryption key .

These keys, combined with the previously stolen information, allowed the attackers to breach the cloud storage firm GoTo and steal LastPass database backups stored on the platform.

Customer data stolen in breach

Personal information stored in the stolen database included encrypted password vaults , names, email addresses, phone numbers, and website URLs associated with customer accounts.

"The threat actor copied information from backup that contained basic customer account information and related metadata including company names, end-user names, billing addresses, email addresses, telephone numbers, and the IP addresses from which customers were accessing the LastPass service," explained LastPass CEO Karim Toubba at the time.

"The threat actor was also able to copy a backup of customer vault data from the encrypted storage container which is stored in a proprietary binary format that contains both unencrypted data, such as website URLs, as well as fully-encrypted sensitive fields such as website usernames and passwords, secure notes, and form-filled data."

The ICO claimed that the attacker did not decrypt customer password vaults, as LastPass' "Zero Knowledge architecture" does not know or store the master passwords used to decrypt vaults, and they are known only to customers.

However, LastPass previously warned that the security of encrypted vaults depended on the strength of a customer's master password, advising that weaker passwords be reset.

"Depending on the length and complexity of your master password and iteration count setting, you may want to reset your master password," reads a LastPass support bulletin about the cyberattack.

This is because GPU-powered brute-force attacks can crack weak master passwords used to encrypt vaults, allowing threat actors to gain access to them.

Some researchers claim this already occurred , stating their research indicates LastPass vaults with weak passwords were decrypted to conduct cryptocurrency theft attacks.

Password security tips

Information Commissioner John Edwards said that while password managers remain a critical tool for security, companies offering such services must ensure access controls and internal systems are hardened against targeted attacks.

He emphasized that LastPass customers had a reasonable expectation that their personal information would be protected and that the company failed to meet this obligation, leading to the penalty announced today.

The ICO encourages organizations to review their device security , remote work risks , and access restrictions.

Customers should also make sure they are using strong, complex passwords, which LastPass recommends be at least 12 characters and include upper- and lowercase letters, numbers, symbols, and special characters.

However, in attacks like these, where increased computational power and offline cracking can occur, it is safer to use a master password of at least 16 characters [ 1 , 2 ] or a long multi-word passphrase to secure highly sensitive information, such as password vaults.


AIs Exploiting Smart Contracts

Schneier
www.schneier.com
2025-12-11 17:06:05
I have long maintained that smart contracts are a dumb idea: that a human process is actually a security feature. Here’s some interesting research on training AIs to automatically exploit smart contracts: AI models are increasingly good at cyber tasks, as we’ve written about before. But ...
Original Article

I have long maintained that smart contracts are a dumb idea: that a human process is actually a security feature.

Here’s some interesting research on training AIs to automatically exploit smart contracts:

AI models are increasingly good at cyber tasks, as we’ve written about before. But what is the economic impact of these capabilities? In a recent MATS and Anthropic Fellows project, our scholars investigated this question by evaluating AI agents’ ability to exploit smart contracts on Smart CONtracts Exploitation benchmark (SCONE-bench), a new benchmark they built comprising 405 contracts that were actually exploited between 2020 and 2025. On contracts exploited after the latest knowledge cutoffs (June 2025 for Opus 4.5 and March 2025 for other models), Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 developed exploits collectively worth $4.6 million, establishing a concrete lower bound for the economic harm these capabilities could enable. Going beyond retrospective analysis, we evaluated both Sonnet 4.5 and GPT-5 in simulation against 2,849 recently deployed contracts without any known vulnerabilities. Both agents uncovered two novel zero-day vulnerabilities and produced exploits worth $3,694, with GPT-5 doing so at an API cost of $3,476. This demonstrates as a proof-of-concept that profitable, real-world autonomous exploitation is technically feasible, a finding that underscores the need for proactive adoption of AI for defense.


Posted on December 11, 2025 at 12:06 PM 0 Comments


Days since last GitHub incident

Hacker News
github-incidents.pages.dev
2025-12-11 16:52:37
Comments...
Original Article
Days since last Github service disruption: 0

Disney Invests $1 Billion in the AI Slopification of Its Brand

403 Media
www.404media.co
2025-12-11 16:48:39
With OpenAI investment, Disney will officially begin putting AI slop into its flagship streaming product....
Original Article

The first thing I saw this morning when I opened X was an AI-generated trailer for Avengers: Doomsday . Robert Downey Jr’s Doctor Doom stood in a shapeless void alongside Captain America and Reed Richards. It was obvious slop but it was also close in tone and feel of the last five years of Disney’s Marvel movies. As media empires consolidate, nostalgia intensifies, and AI tools spread, Disney’s blockbusters feel more like an excuse to slam recognizable characters together in a contextless morass.

So of course Disney has announced it signed a deal with OpenAI today that will soon allow fans to make their own officially licensed Disney slop using Sora 2. The house that mouse built, and which has been notoriously protective of its intellectual property, opened up the video generator, saw the videos featuring Nazi Spongebob and criminal Pikachu , and decided: We want in.

According to a press release, the deal is a 3 year licensing agreement that will allow the AI company’s short form video platform Sora to generate slop videos using characters like Mickey Mouse and Iron Man. As part of the agreement, Disney is investing $1 billion of equity into OpenAI, said it will become a major customer of the company, and promised that fan and corporate AI-generated content would soon come to Disney+, meaning that Disney will officially begin putting AI slop into its flagship streaming product.

The deal extends to ChatGPT as well and, starting in early 2026, users will be able to crank out officially approved Disney slop on multiple platforms. When Sora 2 launched in October, it had little to no content moderation or copyright guidelines and videos of famous franchise characters doing horrible things flooded the platform. Pikachu stole diapers from a CVS, Rick and Morty pushed crypto currencies, and Disney characters shouted slurs in the aisles of Wal-Mart.

It is worth mentioning that, although Disney has traditionally been extremely protective of its intellectual property, the company’s princesses have become one of the most common fictional subjects of AI porn on the internet; 404 Media has found at least three different large subreddits dedicated to making AI porn of characters like Elsa, Snow White, Rapunzel, and Tinkerbell. In this case, Disney is fundamentally throwing its clout behind a technology that has thus far most commonly been used to make porn of its iconic characters.

After the hype of the launch, OpenAI added an “opt-in” policy to Sora that was meant to prevent users from violating the rights of copyright holders. It’s trivial to break this policy however, and circumvent the guardrails preventing a user from making a lewd Mickey Mouse cartoon or episode of The Simpsons . The original sin of Sora and other AI systems is that the training data is full of copyrighted material and the models cannot be retrained without great cost, if at all.

If you can’t beat the slop, become the slop.

“The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works,” Bob Iger, CEO of Disney, said in the press release about the agreement.

The press release explained that Sora users will soon have “official” access to 200 characters in the Disney stable, including Loki, Thanos, Darth Vader, and Minnie Mouse. In exchange, Disney will begin to use OpenAI’s APIs to “build new products” and it will deploy “ChatGPT for its employees.”

I’m imagining a future where AI-generated fan trailers of famous characters standing next to each other in banal liminal spaces is the norm. People have used Sora 2 to generate some truly horrifying videos , but the guardrails have become more aggressive. As Disney enters the picture, I imagine the platform will become even more anodyne. Persistent people will slip through and generate videos of Goofy and Iron Man sucking and fucking, sure, but the vast majority of what’s coming will be safe corporate gruel that resembles a Marvel movie.

About the author

Matthew Gault is a writer covering weird tech, nuclear war, and video games. He’s worked for Reuters, Motherboard, and the New York Times.

Matthew Gault

Things I want to say to my boss

Hacker News
www.ithoughtaboutthatalot.com
2025-12-11 16:35:48
Comments...
Original Article

I’m sitting down to write this in a gap between jobs. The downtime is strange, like the world has stopped moving but my thoughts haven’t caught up. Other than replaying the shit that went down during the last six months – or to put it more bluntly, the reasons I left, I don’t quite know what to do with myself.

What happened wasn’t unique. And that’s the part that bothers me most.

It’s the same stuff I hear from friends, colleagues, people I trust across the industry.

It’s the performance of ‘care’ from leadership. Saying one thing loudly and proudly, yet doing another quietly, repeatedly.

I know this is anonymous, but if you think this is about you, then I hope you do your team a favour and listen.

The things I wish I could say

You can’t fake care. People feel it. In small moments, in the gaps between your words, in the way you prioritise your business over their wellbeing. Care is a practice, not a performance. If you only care when outsiders are watching, you’re just performing.

Communication isn’t optional or a one-way thing. Consistency and honesty build trust. Inconsistency and silence destroy it. If you communicate more externally than with your team, your culture will break down slowly over time.

Ideas stop being shared because “what’s the point?” It’s not like you’re really listening. Meetings become quieter because speaking up feels risky. Colleagues start shrinking, not because their talent fades, but because the space to use it gets narrower.

Burnout isn’t a sign of commitment, it’s a sign of organisational failure. If your best people are exhausted, withdrawn, or like shadows of who they once were, that’s not a resource problem. That’s a You problem.

By the time you notice a culture is broken, the damage has already been done. People have mentally checked out, or quietly left, or stayed but stopped believing.

What I hope (though I’m not holding my breath)

I hope you learn that leadership is more than LinkedIn posts and conference talks.

It’s the day-to-day choices you make when nobody’s applauding. It’s the way you treat people when they’re tired, honest, unwell or “inconvenient”. It’s whether your words match your actions, and whether you’re brave enough to admit when they don’t.

I hope you realise that people don’t leave because they’re unwilling. They leave because you didn’t take care of them. You don’t get to call yourself “people-first” when every decision proves otherwise.

I hope you learn that if you focus on making money instead of the team lining your pockets, you will end up with a broken team and no money.

What good leadership actually looks like

Good leadership isn’t complicated, but it is demanding. It asks more of you than your job title does. It asks for self-awareness, not slogans. It asks you to trade the armour of performance for the discomfort of being accountable.

It’s showing up before the crisis, not after. It’s noticing when someone’s energy changes and checking in, not waiting for them to break. It’s understanding the difference between being busy and being present.

It’s making decisions with people, not about them. It’s protecting your team from unnecessary chaos rather than generating it. It’s recognising that transparency isn’t a risk, but how trust stays alive.

It’s creating conditions where people want to speak — not because they’re brave, but because it’s safe. Where the loudest voices don’t automatically win.

It’s understanding that care is not soft. It’s not indulgent. It’s not a blocker to delivery. It’s the foundation that makes delivery possible. Care is the thing that keeps people willing to stay, to try, to believe. Care is taking responsibility for the things you say and do, and the culture that results in.

If you want loyalty, creativity, honesty, energy, you must earn them. You earn them by being the kind of leader whose actions make it obvious that people matter. Not because it’s good PR. Because it’s your job. And because people matter, and they deserve it.

In the end, good leadership is never proven by what you say about yourself. It’s proven by what people say when you’re not in the room.

And trust me, they’re talking.

I've given you too much of my time, attention and energy in 2025. So in 2026, I plan to do the opposite and not give you any more.

Announcing Rust 1.92.0

Lobsters
blog.rust-lang.org
2025-12-11 16:33:14
Comments...
Original Article

The Rust team is happy to announce a new version of Rust, 1.92.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup , you can get 1.92.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.92.0 .

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel ( rustup default beta ) or the nightly channel ( rustup default nightly ). Please report any bugs you might come across!

What's in 1.92.0 stable

Deny-by-default never type lints

The language and compiler teams continue to work on stabilization of the never type . In this release the never_type_fallback_flowing_into_unsafe and dependency_on_unit_never_type_fallback future compatibility lints were made deny-by-default, meaning they will cause a compilation error when detected.

It's worth noting that while this can result in compilation errors, it is still a lint; these lints can all be #[allow] ed. These lints also will only fire when building the affected crates directly, not when they are built as dependencies (though a warning will be reported by Cargo in such cases).

These lints detect code which is likely to be broken by the never type stabilization. It is highly advised to fix them if they are reported in your crate graph.

We believe there to be approximately 500 crates affected by this lint. Despite that, we believe this to be acceptable, as lints are not a breaking change and it will allow for stabilizing the never type in the future. For more in-depth justification, see the Language Team's assessment .

unused_must_use no longer warns about Result<(), UninhabitedType>

Rust's unused_must_use lint warns when ignoring the return value of a function, if the function or its return type is annotated with #[must_use] . For instance, this warns if ignoring a return type of Result , to remind you to use ? , or something like .expect("...") .

However, some functions return Result , but the error type they use is not actually "inhabited", meaning you cannot construct any values of that type (e.g. the ! or Infallible types).

The unused_must_use lint now no longer warns on Result<(), UninhabitedType> , or on ControlFlow<UninhabitedType, ()> . For instance, it will not warn on Result<(), Infallible> . This avoids having to check for an error that can never happen.

use core::convert::Infallible;
fn can_never_fail() -> Result<(), Infallible> {
    // ...
    Ok(())
}

fn main() {
    can_never_fail();
}

This is particularly useful with the common pattern of a trait with an associated error type, where the error type may sometimes be infallible:

trait UsesAssocErrorType {
    type Error;
    fn method(&self) -> Result<(), Self::Error>;
}

struct CannotFail;
impl UsesAssocErrorType for CannotFail {
    type Error = core::convert::Infallible;
    fn method(&self) -> Result<(), Self::Error> {
        Ok(())
    }
}

struct CanFail;
impl UsesAssocErrorType for CanFail {
    type Error = std::io::Error;
    fn method(&self) -> Result<(), Self::Error> {
        Err(std::io::Error::other("something went wrong"))
    }
}

fn main() {
    CannotFail.method();  // No warning
    CanFail.method();  // Warning: unused `Result` that must be used
}

Emit unwind tables even when -Cpanic=abort is enabled on linux

Backtraces with -Cpanic=abort previously worked in Rust 1.22 but were broken in Rust 1.23, as we stopped emitting unwind tables with -Cpanic=abort . In Rust 1.45 a workaround in the form of -Cforce-unwind-tables=yes was stabilized.

In Rust 1.92 unwind tables will be emitted by default even when -Cpanic=abort is specified, allowing for backtraces to work properly. If unwind tables are not desired then users should use -Cforce-unwind-tables=no to explicitly disable them being emitted.

Validate input to #[macro_export]

Over the past few releases, many changes were made to the way built-in attributes are processed in the compiler. This should greatly improve the error messages and warnings Rust gives for built-in attributes and especially make these diagnostics more consistent among all of the over 100 built-in attributes.

To give a small example, in this release specifically, Rust became stricter in checking what arguments are allowed to macro_export by upgrading that check to a "deny-by-default lint" that will be reported in dependencies .

Stabilized APIs

These previously stable APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust , Cargo , and Clippy .

Contributors to 1.92.0

Many people came together to create Rust 1.92.0. We couldn't have done it without all of you. Thanks!

Microsoft bounty program now includes any flaw impacting its services

Bleeping Computer
www.bleepingcomputer.com
2025-12-11 16:00:46
Microsoft now pays security researchers for finding critical vulnerabilities in any of its online services, regardless of whether the code was written by Microsoft or a third party. [...]...
Original Article

Microsoft

Microsoft now pays security researchers for finding critical vulnerabilities in any of its online services, regardless of whether the code was written by Microsoft or a third party.

This policy shift was announced at Black Hat Europe on Wednesday by Tom Gallagher, vice president of engineering at Microsoft Security Response Center.

As Gallagher explained , attackers don't distinguish between Microsoft code and third-party components when exploiting vulnerabilities, prompting the company to expand its bug bounty program to cover all Microsoft online services by default, with all new services in scope as soon as they are released.

The program now also includes security flaws in third-party dependencies, including commercial or open-source components, if they impact Microsoft online services.

"Starting today, if a critical vulnerability has a direct and demonstrable impact to our online services, it’s eligible for a bounty award. Regardless of whether the code is owned and managed by Microsoft, a third-party, or is open source, we will do whatever it takes to remediate the issue," Gallagher said .

"Our goal is to incentivize research on the highest risk areas, especially the areas that threat actors are most likely to exploit.  Where no bounty programs exists, we will recognize and award the diverse insights of the security research community wherever their expertise takes them."

Microsoft has paid over $17 million in bounty awards to 344 security researchers over the last 12 months, and another $16.6 million to 343 security researchers during the previous year.

Today's announcement is part of Microsoft's broader Secure Future Initiative , designed to prioritize security across all of the company's operations.

As part of the same initiative, Microsoft also disabled all ActiveX controls in Windows versions of Microsoft 365 and Office 2024 apps, and has updated Microsoft 365 security defaults to block access to SharePoint, OneDrive, and Office files via legacy authentication protocols.

More recently, it began rolling out a new Teams feature to block screen capture attempts during meetings and announced plans to secure Entra ID sign-ins from script injection attacks.


Show HN: GPULlama3.java Llama Compilied to PTX/OpenCL Now Integrated in Quarkus

Hacker News
news.ycombinator.com
2025-12-11 15:59:33
Comments...
Original Article

wget https://github.com/beehive-lab/TornadoVM/releases/download/v...
unzip tornadovm-2.1.0-opencl-linux-amd64.zip
# Replace <path-to-sdk> manually with the absolute path of the extracted folder
export TORNADO_SDK="<path-to-sdk>/tornadovm-2.1.0-opencl"
export PATH=$TORNADO_SDK/bin:$PATH

tornado --devices
tornado --version

# Navigate to the project directory
cd GPULlama3.java

# Source the project-specific environment paths -> this will ensure the
source set_paths

# Build the project using Maven (skip tests for faster build)
# mvn clean package -DskipTests or just make
make

# Run the model (make sure you have downloaded the model file first - see below)
./llama-tornado --gpu --verbose-init --opencl --model beehive-llama-3.2-1b-instruct-fp16.gguf --prompt "tell me a joke"

Porn Is Being Injected Into Government Websites Via Malicious PDFs

403 Media
www.404media.co
2025-12-11 15:56:21
Dozens of government websites have fallen victim to a PDF-based SEO scam, while others have been hijacked to sell sex toys....
Original Article

Dozens of government and university websites belonging to cities, towns, and public agencies across the country are hosting PDFs promoting AI porn apps, porn sites, and cryptocurrency scams; dozens more have been hit with website redirection attacks which lead to animal vagina sex toy ecommerce pages, penis enlargement treatments, automatically-downloading Windows program files, and porn.

“Sex xxx video sexy Xvideo bf porn XXX xnxx Sex XXX porn XXX blue film Sex Video xxx sex videos Porn Hub XVideos XXX sexy bf videos blue film Videos Oficial on Instagram New Viral Video The latest original video has taken the internet by storm and left viewers in on various social media platforms ex Videos Hot Sex Video Hot Porn viral video,” reads the beginning of a three-page PDF uploaded to the website of the Irvington, New Jersey city government’s website.

The PDF, called “XnXX Video teachers fucking students Video porn Videos free XXX Hamster XnXX com” is unlike many of the other PDFs hosted on the city’s website, which include things like “2025-10-14 Council Minutes,” “Proposed Agenda 9-22-25,” and “Landlord Registration Form (1 & 2 unit dwelling).”

It is similar, however, to another PDF called “30 Best question here’s,” which looks like this:

Irvington, which is just west of Newark and has a population of 61,000 people, has fallen victim to an SEO spam attack that has afflicted local and state governments and universities around the United States.

💡

Do you know anything else about whatever is going on here? I would love to hear from you. Using a non-work device, you can message me securely on Signal at jason.404. Otherwise, send me an email at jason@404media.co.

Researcher Brian Penny has identified dozens of government and university websites that hosted PDF guides for how to make AI porn, PDFs linking to porn videos, bizarre crypto spam, sex toys, and more.

Reginfo.gov, a regulatory affairs compliance website under the federal government’s General Services Administration, is currently hosting a 12 page PDF called “Nudify AI Free, No Sign-Up Needed!,” which is an ad and link to an abusive AI app designed to remove a person’s clothes. The Kansas Attorney General’s office and the Mojave Desert Air Quality Management District Office in California hosted PDFs called “DeepNude AI Best Deepnude AI APP 2025.” Penny found similar PDFs on the websites for the Washington Department of Fish and Wildlife, the Washington Fire Commissioners Association, the Florida Department of Agriculture, the cities of Jackson, Mississippi and Massillon, Ohio, various universities throughout the country, and dozens of others. Penny has caught the attention of local news throughout the United States , who have reported on the problem.

The issue appears to be stemming from websites that allow people to upload their own PDFs, which then sit on these government websites. Because they are loaded with keywords for widely searched terms and exist on government and university sites with high search authority, Google and other search engines begin to surface them. In the last week or so, many (but not all) of the PDFs Penny has discovered have been deleted by local governments and universities.

But cities seem like they are having more trouble cleaning up another attack, which is redirecting traffic from government URLs to porn, e-commerce, and spam sites. In an attack that seems similar to what we reported in June , various government websites are somehow being used to maliciously send traffic elsewhere. For example, the New York State Museum’s online exhibit for something called “The Family Room” now has at least 11 links to different types of “realistic” animal vagina pocket masturbators, which include “Zebra Animal Vagina Pussy Male Masturbation Cup — Pocket Realistic Silicone Penis Sex Toy ($27.99),” and “Must-have Horse Pussy Torso Buttocks Male Masturbator — Fantasy Realistic Animal Pussie Sex Doll.”

Links Penny found on Knoxville, Tennessee’s site for permitting inspections first go to a page that looks like a government site for hosting files then redirects to a page selling penis growth supplements that features erect penises (human penises, mercifully), blowjobs, men masturbating, and Dr. Oz’s face.

Another Knoxville link I found, which purports to be a pirated version of the 2002 Vin Diesel film XXX simply downloaded a .exe file to my computer.

Penny believes that what he has found is basically the tip of the iceberg, because he is largely finding these by typing things like “nudify site:.gov” “xxx site:.gov” into Google and clicking around. Sometimes, malicious pages surface only on image searches or video searches: “Basically the craziest things you can think of will show up as long as you’re on image search,” Penny told 404 Media. “I’ll be doing this all week.”

The Nevada Department of Transportation told 404 Media that “This incident was not related to NDOT infrastructure or information systems, and the material was not hosted on NDOT servers. This unfortunate incident was a result of malicious use of a legitimate form created using the third-party platform on which NDOT’s website is hosted. NDOT expeditiously worked with our web hosting vendor to ensure the inappropriate content was removed.” It added that the third-party is Granicus, a massive government services company that provides website backend infrastructure for many cities and states around the country, as well as helps them stream and archive city council meetings, among other services. Several of the affected local governments use Granicus, but not all of them do; Granicus did not respond to two requests for comment from 404 Media.

The California Secretary of State’s Office told 404 Media: “A bad actor uploaded non-business documents to the bizfile Online system (a portal for business filings and information). The files were then used in external links allowing public access to only those uploaded files. No data was compromised. SOS staff took immediate action to remove the ability to use the system for non-SOS business purposes and are removing the unauthorized files from the system.” The Washington Department of Fish and Wildlife said “WDFW is aware of this issue and is actively working with our partners at WaTech to address it.” The other government agencies mentioned in this article did not respond to our requests for comment.

About the author

Jason is a cofounder of 404 Media. He was previously the editor-in-chief of Motherboard. He loves the Freedom of Information Act and surfing.

Jason Koebler

Deprecate Like You Mean It

Hacker News
entropicthoughts.com
2025-12-11 15:52:30
Comments...
Original Article

Seth Larson noticed that people don’t act on deprecation warnings . The response.getheader method in urllib has been deprecated since 2023 because the response.headers dictionary is what should be used instead. When the method was eventually removed, lots of code broke.


Deprecation warnings try to solve the fat step function associated with backwards-incompatible api changes, by allowing people to schedule the maintenance burden, rather than having it imposed on them suddenly all at once. The problem is the economic cost of waiting is not tangible. You can ignore the deprecation warning right up until the api change happens, and then it becomes very expensive to delay it further.

People aren’t great at planning for sudden changes.


What if we intentionally made deprecated functions return the wrong result … sometimes? Every time it intentionally returns the wrong result, it logs the deprecation warning. 1 Users that are very sensitive to the correctness of the results might want to swap the wrong result for an artificial delay instead.

Initially, it should never return the wrong result. But after it’s been deprecated for a few months, it should start to return the wrong result once every million invocations, say. That would probably not trigger anyone’s midnight pager, but it would make it clear that relying on the deprecated functionality is a bug lurking in the code.

Then after a few more months, turn it up to once every ten thousand invocations. It’s probably going to start to hurt a little to delay the maintenance. After a year, make it return the wrong thing once every thousand invocations. At this point, users can only delay maintenance if it’s an unimportant auxiliary usage. And finally, as we bump up into the deadline, it should return the wrong thing every other invocation. Now it’s practically useless, just like when it is removed.

This makes the deprecated parts of the api increasingly buggy until they’re removed, and makes the economic tradeoff of when to schedule the maintenance more immediate to users.
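To make the escalation concrete, here is a minimal sketch of what such a shim around a deprecated function could look like; it is written in C with invented names and rates, since the post does not prescribe any particular implementation.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Hypothetical shim: with a failure rate that maintainers raise in each
   successive release, the deprecated call sometimes returns a deliberately
   wrong answer and logs the deprecation warning. */
static double deprecation_failure_rate = 1e-6;   /* raised release by release */

/* the replacement API */
static int new_add(int a, int b) { return a + b; }

/* the deprecated API, kept around during the migration window */
static int old_add(int a, int b)
{
    if ((double)rand() / RAND_MAX < deprecation_failure_rate) {
        fprintf(stderr,
                "DeprecationWarning: old_add() is deprecated, use new_add()\n");
        return a + b + 1;                        /* intentionally wrong result */
    }
    return new_add(a, b);
}

int main(void)
{
    srand((unsigned)time(NULL));
    printf("%d\n", old_add(2, 2));               /* almost always 4... for now */
    return 0;
}

Bumping deprecation_failure_rate from one-in-a-million towards one-in-two over successive releases is what turns the invisible cost of waiting into a visible, steadily worsening bug rate, which is the whole point of the scheme.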

Crick and Watson Did Not Steal Franklin's Data

Hacker News
nautil.us
2025-12-11 15:47:48
Comments...
Original Article

Working in Cambridge, James Watson and Francis Crick discovered the double helix in early 1953, while Maurice Wilkins and Rosalind Franklin, researchers at King’s College London, were also trying to crack the structure. Franklin was about to leave King’s and DNA work all together, while Wilkins was preparing to focus his mind more closely on the problem once Franklin left. It’s widely believed that Watson and Crick stole Franklin’s data and that this enabled them to make their breakthrough.

The idea can be traced back to Watson’s page-turning but unreliable memoir, The Double Helix , in which he describes seeing X-ray diffraction images at King’s in January 1953 and feeling excited about them. He does not say who made those images (although he does say that Wilkins had been repeating some of Franklin’s observations), but most people believe that this was one of Franklin’s images despite a lack of reliable evidence for this.  Even if the image had been so decisive, surely Franklin—an expert—would have realized this herself.


With Nathaniel Comfort (who is writing a biography of Watson), I discovered that in January 1953, Franklin suggested Crick talk to a colleague, who had an informal report of the work she and Wilkins were doing at King’s, if he wanted to learn more about her findings. There is no indication that she was concerned about sharing her results.

CLOSE READING: Matthew Cobb says that the now widespread assumption that Francis Crick and James Watson stole Rosalind Franklin’s data to make their momentous discovery about the double-helix structure of DNA is based on non-existent evidence and isn’t borne out by the rest of the facts. Photo by Chris Schmauch.

Interviews with Crick from the 1960s and a close reading of the Watson and Crick research papers show that the actual process of making the breakthrough did not involve using any of Franklin’s data. Instead, the pair spent a month fiddling about with cardboard shapes corresponding to the component molecules of DNA, using the basic rules of chemistry. Once they had finally, almost by accident, made the discovery, then they could see that it corresponded to Franklin’s data.


Franklin was not hostile to the pair—she continued to share her data and ideas with both men and subsequently became very close friends with Crick and his wife, Odile. She regularly stayed at their Cambridge home, went to their notorious parties and went to the theatre with Odile. Later, after her cancer diagnosis, she convalesced with the Cricks, twice. There are some charming letters from Odile that I quote in the book, describing their friendship.


In 1947, Crick set out his twin ambitions—to understand the nature of life, and of the human brain. With his work on molecular biology, he made huge strides toward achieving that first ambition; in 1977 Crick settled in California, working at the Salk Institute, with the aim of understanding consciousness. Although he made no single decisive breakthrough—we still do not understand how consciousness works —he played a decisive role in creating modern neuroscience.


First, he used his reputation and influence to argue for a focus on precise anatomy—the origin of today’s huge projects to map animal brains and, eventually, the human brain.

If the image had been so decisive, surely Franklin would have realized this herself.

Then, in the 1980s he was closely involved with the cognitive scientists and computer scientists who developed something called Parallel Distributed Processing—the distant precursor of today’s AI systems. He argued for fusing computer models of behaviour with precise anatomical knowledge to gain insight into how nervous systems work, and collaborated closely with AI pioneers Geoffrey Hinton and John Hopfield , who in 2024 shared the Nobel Prize for their work.


Above all, working with Christof Koch , he set out a materialist approach for investigating consciousness—much of today’s interest in the topic can be traced back to Crick’s pioneering advocacy and insight.

Although this contribution has largely been forgotten, his role was widely recognized before the turn of the millennium. He regularly published articles and think-pieces in Nature, the leading scientific journal, and in 1994 he wrote a hugely successful popular book about his ideas, The Astonishing Hypothesis, which helped to shape the thinking of both scientists and the public about the nature of consciousness.



One aspect of Crick’s life that even his close collaborators did not know of was his fascination with poetry. He even tried his hand at writing verse—some of it was dreadful, but other attempts, quoted in my book, were pretty good.

His closest relationship with poetry came with the work of Michael McClure. In 1959, Crick bought a copy of one of McClure’s early works— Peyote Poem —in City Lights Bookstore in San Francisco. This described the psychedelic effects of chewing peyote; Crick did not know what this was, but he was struck by McClure’s writing and pinned the long poem in the hall of his Cambridge home.

In the early 1970s, he got a chance to meet McClure, who was by now quite well known—“the Prince of the San Francisco poetry scene,” said one observer—and the two struck up a close friendship (Crick had also experimented with LSD by this point). McClure would send Crick early versions of his poems and his essays; Crick would give his opinion, which McClure sometimes accepted, changing his work as a result. Their letters—scattered in archives around the world—reveal an intimate and unusual friendship.


For the next three decades the two men exchanged visits and letters, which reveal an emotionally charged, subjective side to Crick that might seem to contradict his materialist approach to science. But, in fact, his approach to science was not strictly logical, but full of fun and sudden, intuitive glimpses into facets of reality that were previously hidden. Understanding his interest in poetry, and in McClure’s work in particular, sheds light on Crick’s character and on his science.


Lead image: Maryna Olyak / Shutterstock


  • Matthew Cobb


    Matthew Cobb is a professor emeritus at the University of Manchester. He earned his Ph.D. in psychology and genetics from the University of Sheffield. He is the author of seven books including: Crick: A Mind in Motion, As Gods: A Moral History of the Genetic Age, The Idea of the Brain, and Life’s Greatest Secret . He lives in England.

The Colonization of Confidence

Lobsters
sightlessscribbles.com
2025-12-11 15:45:39
Comments...
Original Article

There's a texture to grief very few examine. People talk about the loud grief—the kind of grief that shatters your soul and rattles your cage of will—but there's another kind of grief that's rarely explored. Watching and listening to your friends losing who they are.

I think the climate is foreshadowing for my life. The bitter cold of the winter is a warning sign I don't pay attention to as I sip hot herbal tea at my writing desk, vanquishing my writer's block and attempting to put some literary order back into the chaotic rhythm of my thoughts.

A phone notification shakes me out of my thoughts about how I can stretch out two men taking their clothes off for about five hundred words. It's Leo. He's texted me.

"Hey Rob," he says, "Did you forget about the writing group today?"

I did, but I can't tell him that.

"Nope! Just getting ready!"

When he makes it to my apartment, there's an email notification from my phone. I don't have time to read it at the moment, and oh, I am a fool for not stopping to check what that email is.


The air in the conference space where our "Writers of the Future" group meets always smells of harsh cologne and performative productivity. It is a sensory assault of laptop fans and the frantic tapping of laptop keys. I sit in the corner, my cane hooked over the back of my chair, listening to the room.

I hate the name. Writers of the Future sounds like a corporate slogan for a pesticide company, but the name is less important than the people within it, and that's why I'm here.

Leo sits next to me. I can hear the nervous rhythm of his breathing. Leo is a Black writer with a voice that sounds like jazz—unpredictable, syncopated, full of unexpected dissonances that resolve into heartbreaking chords. He writes about his life, the specific, humid weight of a Texas summer, the way joy can taste like a cheap freeze-pop on a Tuesday.

"I have something new," Leo says. His voice is tight. "But... it's rough. It's really messy."

"Messy is good," I say, leaning in. "Messy is where the blood is."

Leo is a writer of immense, jagged power. He is a powerful man with a voice that sounds like gravel crunching under heavy tires, deep and resonant and full of a history that refuses to be smoothed over. When he reads his work aloud, the air in the room changes pressure. His prose is not "clean." It does not flow like water; it flows like molasses, thick and sweet and slow, or sometimes like lava, burning everything it touches. He tells more than he shows, a stylistic choice that critics hate but which I find profoundly honest. He doesn't invite you to watch; he commands you to listen.

And I love everything he writes.

Today, the writers in this group aren't buzzing with typing. They're listening to Chad, a white tech bro that sounds like his larynx is constantly massaging his speech for a pamphlet instead of talking to people.

Chad isn't a writer. He's a content generator. He's published 400 generated books on Amazon and won't stop, not to mention the incessant bragging he does about how he's a better writer because of whatever LLM he's using today. He is the kind of person who listens to podcasts at 2.5x speed because he believes silence is an inefficiency to be eliminated. He works in "Prompt Engineering," a job title that sounds to me like "Assembly Line Foreman for the Dream Factory."

Chad's voice is smooth, frictionless, possessing the terrifying cheerfulness of a customer service bot that cannot be turned off. "Claude just released an update. It's incredible for unblocking. It really smooths out the edges."

He's explaining to the group how to get the most out of large language models. Leo, beside me, is fidgeting with as much annoyance as I am. We're both immensely relieved when Brad finally claps his hands together, as if he's about to announce a press release.

"Okay," says Brad. Brad is the organizer. His voice is a rich, polished baritone that vibrates in his throat but never seems to reach his chest. It's the voice of a podcast host who sells mattresses between segments on mindfulness. "Leo, you're up. Did you bring the revision of The Asphalt Hymn ?"

"Yeah," Leo says,

"Great! Alright, creators," Brad announces. The word 'creators' sounds slippery in his mouth, like he's selling a subscription service. "Let's synergize the workflow tonight. Go on, Leo."

I have to suppress the urge to violently groan about his word choice. Instead, I turn my head towards Leo. I can't wait to hear this.

Leo reads. It's a piece about his grandmother's kitchen. He describes the smell of collard greens as a heavy, green blanket that wrestled the air into submission. He describes her laugh as a rusted hinge that still worked perfectly.

It is beautiful. It is jagged. It stops me cold.

When he finishes, there is a silence. Not the reverent silence of a shared emotional impact, but the uncomfortable, shifting silence of a boardroom that has just been shown a graph with a downward trend.

"It's... interesting," Chad says. "But the pacing is a little weird. And that metaphor about the hinge? It kind of pulls you out of the immersion. It's not very smooth."

"It's not supposed to be smooth," I snap. "It's supposed to be true."

"Leo," Chad says, choosing to ignore me entirely—he hates me just as much as I hate him—and addresses Leo directly. "Leo, did you read what I sent before this meeting? I took your last draft and ran it through Claud."

The fuck? I roar internally.

"I asked it to optimize for flow and readability. Listen to this."

Chad clears his throat. The sound is grating. It sounds as if he's optimizing how to speak to humans. He reads from his screen.

"Her laugh was a joyous sound, ringing through the house like a silver bell. The kitchen always smelled delicious, the aroma of greens filling the air like a warm hug."

The words hang in the air like dead flies. They are perfectly grammatical. They are perfectly structured. And they are completely, utterly stripped of everything that is Leo. A silver bell? A warm hug? It is the language of a Hallmark card written by a sociopath, a card Leo would never write.

"See?" Chad says, triumphant. "It just flows better. It's more... accessible. You should maybe use that as a base, Leo. Clean up your edges."

I wait for the room to revolt. I wait for the other writers to laugh Chad out of the shop. But they don't.

"Actually," says Amber, a poet who used to write searing verse about heartbreak, "that does sound a lot more professional, Leo. The 'silver bell' image is really classic."

"Yeah," says Mike. "It's less... abrasive."

I feel Leo shrink beside me. I can sense the physical collapse of his posture, the way he folds in on himself.

"I guess," Leo whispers. "I guess I was trying too hard with the hinge."

"No," I say, and my voice is the low growl of a guard dog. "The hinge was perfect. The bell is a lie. Leo, that machine didn't improve your writing. It erased your grandmother."

"You're just resistant to the tools, Robert," Chad sighs. "It's the future. Adapt or die."

"Oh, go power down somewhere. This isn't adaptation," I say, gripping the edge of the table until my knuckles ache. "It's lobotomy."

Nobody agrees with me, though. I am an outcast in a gaggle of readers and writers who seem to prefer LLM writing to the jagged edges.

"And this is why," Chad practically yells, "why you don't listen to Luddites! Work smarter, not harder," Chad beams. The smile in his voice is audible, a stretching of wet skin. "That's the future. Why churn butter when you can buy margarine, right?"

I sit there, gripping my cane until my knuckles pop, feeling the beginning of a horror story that has nothing to do with ghosts and everything to do with the slow, methodical erasure of a human soul.


Part 2.

I walk home, my cane tapping a furious rhythm against the concrete.

I fucking hate them.

I fucking hate the Tech Bros. I hate the hype. I hate the Bros wrongly claiming LLMs will turn us all into toast. I hate their never-ending quest to make their investments have a return. I hate the venture capitalists in their Patagonia vests who talk about "disruption" while they burn down the library of human experience and fuck over workers. I hate them with the specific, intricate hatred of a survivor who knows exactly how the grift works.

I hate LLMs. My hatred knows no bounds. I love the small web, the clean web. I hate tech bloat.

And LLMs are the ultimate bloat.

They didn't build these things to help us. I know this.

Why do LLMs exist? They exist to harm workers. They say it's to "democratize creativity." Bullshit. You don't democratize creativity by automating the act of creation. You democratize it by funding arts education, by supporting libraries, by paying writers a living wage.

No, they created LLMs to solve a supply chain problem.

The "problem" was that creating art—real, human, meaningful writing—is slow. It is expensive. It is unpredictable. And it is diverse. It requires dealing with people . People with traumas, people with political opinions, people with voices that don't fit into a corporate style guide. Minority writers, specifically, are "high friction." We talk about queerness and transphobia and racism, and We talk about disability. We make the advertisers uncomfortable.

So the Tech Bros, in their infinite mediocrity, decided to bypass the human element entirely. They built a machine that scrapes our work—our pain, our joy, our very souls—without consent, grinds it into a mathematical slurry, and extrudes it as a flavorless, inoffensive paste that can be sold by the bucket.

They built a machine to gentrify the English language.

And the horror of watching my friend lose his soul almost eats me alive.


The next week, Leo brings a draft that was written 90% by him. His justifications are quiet, unsure of his own skill.

The week after, he brings a piece that's written 50% by him. The shame is palpable, but it hurts me even more when readers and writers say the LLM versions are always better than his old drafts.

This week, he stuns us all.

"I don't—I don't have a draft," he admits. I immediately take his hand among the gaping and gawking of the room. His skin feels like it's a hallow shell of his soul. His voice is strained, broken, like there's something that's fundamentally shattering inside him nobody can see.

"I just—I just—I haven't written in a while," he says. Claude has been down. I tried to write something… and, and… it just… I can't write. I ain't got nothin'."

Chad chooses that exact moment to colonize the conversation.

"Leo sent me his last draft," Chad announces.

I freeze. NO!

"And," Chad continues, "I ran it through the new 'Literary Fiction' AI I've been working on. Listen to the difference."

He reads Leo's original sentence. I know it is Leo's because it has teeth.

And then he gets to the LLM version.

The group murmurs. "Oh, that flows so much better," someone says. "It's more... universal."

"Universal means average," I snap, wishing I could get away with prompt engineering Chad's stupid LLM to self destruct. "It means it touches no one because it tries to touch everyone."

"Robert," Chad sighs. "Always the contrarian. Look, the metrics don't lie. Readers prefer the second one. We did A/B testing on Wattpad. The second one had a 40% higher retention rate."

"You are feeding them sludge," I say, my voice rising. "And because they are starving, they eat it. And then you tell them sludge is what food tastes like."

"It's just writing, Rob," Chad says, his voice hardening. "Why do you have to make it a crusade? Leo likes it. Don't you, Leo?"

We all turn to Leo. Even though I can't see his face, I feel the weight of the room pivot toward him.

"I..." Leo's voice cracks. A dry, brittle sound. "I don't know."

"Leo," I say gently. "Tell him you are a good writer that doesn't need fixing."

"It's better," Leo whispers.

The words hit me like a physical blow. A kick to the shin.

"What?" I ask.

"It's better," Leo says, louder this time, but with a hysterical edge, a vibration of pure panic. "He's right, Rob. My metaphors... they're confusing. People don't get them. The machine... it makes me sound smart. It makes me sound like a real writer."

"You don't need an LLM to write for you. You're a good writer," I say, but his voice cracks as he stands up, snatches his coat, and practically bolts out of the room.

"I can't write anymore. I can't write anymore. I just—I can't."

I chase after him but it's too late. He's already out of the building.


The unraveling doesn't happen all at once. It is a slow rot, a decomposition of confidence that the Germans call Zersetzung. Psychological decomposition. It was the method the Stasi used to break dissidents not with torture, but with gaslighting, with subtle alterations of reality until the subject lost faith in their own mind.

The Tech Bros haven't invented the term, but they have automated the process. They have built a machine that weaponizes mediocrity and sells it as perfection.

Two weeks later, I sit in Leo's apartment. The air is heavy with the smell of stale laundry and despair. I have brought cookies—oatmeal raisin, heavy on the cinnamon, baked until the edges are crisp and the centers are dense and chewy. Food is a language of gravity; it pulls you back to earth.

"Eat," I say, pushing the container across the table.

Leo doesn't move. "I got into Fiction Magazine ," he says softly.

"Leo! That's huge!" I reach out, finding his forearm. His muscle is tense, rigid as wood. "Why do you sound like you're at a funeral?"

"Because I didn't write it, Rob."

"You used the LLM again?"

"I used it for the whole thing," he whispers. "I fed it my draft. My messy, ugly, jagged draft. And it spit out... it spit out this smooth, perfect thing. And the editors loved it. They said it was 'refreshing.' They said the prose was 'elegant.'"

He pulls his arm away from me. I hear the frantic clicking of keys.

"Listen to this email," he says. His voice sounds like he's chewing glass as he reads it to me.

"Dear Leo, we were impressed by the fluidity of your metaphors. Usually, your work feels a bit... disjointed. But this piece sang. The line about the 'tapestry of fate' was particularly moving."

"Tapestry of fate," Leo spits. The words sound like poison in his mouth. "I hate that phrase. I would never write that phrase. It's a cliché. It's slop. But they liked it better, Rob. They liked the machine better than me."

"They liked the conditioning," I say, my voice sharp. "They liked the familiarity. It's the linguistic equivalent of McDonald's fries. It's chemically engineered to be palatable, but there's no nutrition in it. It passes through the brain without leaving a mark."

"Maybe I'm just bad," Leo says. The sentence hangs in the air, heavy and wet.

"You are not bad. You are a jazz musician in a world trying to sell ringtones."

"But Robert, I can't do it anymore," Leo says, and then he breaks. It isn't a loud sob. It is a continuous intake of breath, a collapse of the chest. "Fuck, I tried to write this morning. Just me. Just a blank document. And I stared at the blinking cursor, and every sentence I thought of felt... weak. It felt amateur. I wrote, 'The rain hit the roof like gravel.' And then I thought, no, the AI would say something better. And I deleted it. I deleted it all."

I'm about to say something when he continues,

"I feel so stupid, Rob," he chokes out, the words drowning in his tears. "I feel so stupid. I let it into my head and now I can't get it out. I look at my own thoughts and they look wrong. They look like errors."

I stand up and walk around the table, navigating by the sound of his breathing. I find his shoulder and pull him into an embrace. He clings to me for dear life, his hands clutching my body as if I'm his only life raft at sea.

"That is the trap, Leo. That is the psychological warfare. They are selling you a solution to a problem they created. They want you to feel insecure. If you feel insecure, you pay the subscription. They are strip-mining your confidence to sell you back a synthetic version of it."

"I'm losing money!" he wails. "I'm losing money, I can't write anymore—and readers—readers love my LLM writing! I don't know what to do, Rob! Readers like it. Editors like it. Editors. Editors of Magazines. I can't write anymore! I just—what the hell is happening to me—why do readers like it?"

"I don't know," I answer honestly, rocking him, trying to comfort his soul with my words. "But just because people want to eat McDonalds, that doesn't mean we need to stop home cooking. But I don't know what to do. I'm just one person, but Leo, I love you, okay? I'll always be here if you need me. If you need me to look over a draft or—"

"It's not just the subscription," he weeps. "It's the readers. They're getting used to the slop, Rob. They're getting used to the smooth edges. My writing... my real writing... it feels like it has too much friction now. Even the editors. Even the people who are supposed to know better."

"Friction is where the heat comes from," I say fiercely. "Friction is where the life is."

But he can't hear me. The Zersetzung is working. He is taking himself apart, piece by piece, and replacing the parts with synthetic fillers because the world has told him his own parts are defective.


Days later, it's my turn. We're back at that fucking writers' group, although today it feels like a uniquely crafted torture chamber.

I decided to read something. I usually don't. I keep my work for my blog, for the people who understand that a screen reader isn't a constraint but an instrument. But I needed to show Leo that imperfection was power. I needed to prove that raw data hits harder than processed data.

I read a piece about my mother. It is raw. It is angry. It describes the sound of her voice as "a serrated knife cutting through warm butter." It describes the smell of the trailer park as "wet cardboard and ambition gone sour." It is jagged. It has no tapestries. It has no symphonies. It is a piece of writing that demands you look at the ugly thing and call it by its name. It's imperfect, but it's everything me.

When I finish, the room is silent.

"Interesting," Chad says. The word is a dismissal. "Very... visceral. But Rob, my guy, it's a little hard to follow. The sentence structure is all over the place. And 'wet cardboard'? It's a bit gross, isn't it?"

"It's the truth," I say.

"Truth doesn't scale," Chad says. He taps on his keyboard. "Just for fun, I ran your first paragraph through the new GPT-fiction-Plus wrapper I'm beta testing. Just to see what it could do with the core idea. I prompted it to 'elevate the tone' and 'smooth the syntax.'"

"Don't," I warn. My voice drops—calm, dangerous, flat. "Do not run my life through your blender."

"Too late," Chad says. "Listen to this improvement."

He clears his throat, loving this. I feel like I'm being punctured by a thousand arrows through the heart.

When he finishes, Chad beams. I can hear the smile in his voice. "See? Same info, but now it's palatable. Now it's content . It took out the aggression. It leveled the tone. It's objectively better writing."

A woman in the front row murmurs, "Oh, that is much nicer. It feels more... literary."

"Exactly!" Chad says. "I fixed it."

"Chad," I say, my voice calm, the voice of a teacher correcting an unwilling student. "Why did it choose 'stone' for the throat metaphor?"

"What? Because it's intelligent. It's smart. It knows. Because it fits."

"No," I say. "It chose 'stone' because statistically, in the petabytes of training data scraped without consent from the internet, the word 'stone' appears in proximity to 'lump in throat' with a probability of 0.04 percent, which is higher than 'wet creature.' It isn't a choice. It's a math problem. It is a predictive text algorithm on steroids. It doesn't know what a throat is. It doesn't know what fear feels like. It is predicting the next token based on mediocrity."

"It's still better writing," Chad insists, but his voice vibrates with the anger that someone who he thinks is a technophobe knows this tech better than he knows the tech.

"It is a hallucination of competence," I say. "Chad, asking that machine to improve writing is like asking a blender to improve a salad. You don't get a better salad. You get sludge. And you," I turn my head to sweep the room, "you are all cheering for the sludge because it's easy to swallow. You are letting this tech bro convince you that your own distinct flavors are defects."

I hear a sound next to me.

It is Leo. He is making a sound like a wounded animal trying to stay quiet. A high, thin whine in the back of his throat.

"You… fixed it," Leo whispers.

"See?" Chad says to me. "Even Leo gets it."

"No," I say. I stand up, unfolding my cane with a snap that sounds like a gunshot in the quiet room. "You didn't fix it. You lobotomized it. You took a scream and turned it into elevator music. You took the specific, painful texture of my life and turned it into a generic stock photo."

"Whoa, calm down, Luddite," Chad laughs. "You're just jealous the machine has better flow than you."

"I am not a fucking technophobe!" I shout, and the room flinches. "I built my own website stack from scratch! I code in raw HTML! I know more about the architecture of the internet than you and your wrapper-scripting ass ever will! I love technology! I love tools that expand human capacity! This?" I point my cane at his laptop. "This isn't a tool. This is a replacement. This is a parasite. It doesn't expand us; it eats us. It eats our confidence. It eats our specificity. It eats our struggle."

I turn to Leo. "I'm a better writer than this—any LLM. So are you, Leo! You're a better writer."

Leo doesn't move. He stands up, sad, resigned. He grabs his coat and walks out. I quickly tap my way out to follow him.

"Rob," he says, stopping to turn back, facing my fury. "Rob, I wish I was as good of a writer as you. I'm a shit—"

"Leo, no. You're a good writer—"

"I can't write, Rob!" Leo screams. It is a terrifying sound, a man ripping his own throat open. "I feel so stupid! I look at my words and they look like trash! I can't do it! I'm done! I hate everything I do without the AI!"

I have no idea what to say. Leo comes closer, his hands grasping mine. The force of his grip breaks my heart.

"Rob," he says, and his voice shakes with tears. "I hate this. I hate this."

"I know, buddy. I know! I'm here."

As I let go for now, I don't know what to do to help him since my writing exercises didn't work. I told him to write fun Fanfiction. I told him to just write something bad. It's okay. I told him he didn't have to be marketable, but nothing is working, and he hates his own writing more as the days pass.

I know Leo can still write without an LLM. He just needs encouragement. This is the thought that plants another seed as I leave the building. I can't go home yet.


Part 3.

I take a bus to the other side of the city, to a neighborhood that smells of pupusas, exhaust fumes, and resilience. I walk three blocks, absently counting the cracks in the sidewalk through the tip of my cane, until the acoustic landscape changes. The echo of the street vanishes, absorbed by walls of paper.

I eventually make it to The Cat's Shelf. The bell above the door doesn't ding; it clatters, a sound of heavy brass against wood. The air inside is cool and smells of vanilla (from the degrading lignin in old paper), binding glue, and peppermint tea.

Entering it is a sensory baptism. The floorboards groan under my feet—a specific, B-flat groan that tells me exactly where I am in the room. There is no ultrasonic hum of cooling fans here. There is the rustle of pages and the hushed murmuring of conspiracy.

"Rob?" Sarah's voice comes from behind the counter. Sarah sounds like she looks—sturdy, worn, and warm. She is a trans woman who built this bookstore with her bare hands and a lot of crowdfunding. She understands the politics of space.

"I need a favor," I say. "A big one."

"For you? Always. Do you need an audiobook or a body buried?"

"I need a router," I say. "And I need a space."

I explain it all. The Tech Bros. Chad. The Zersetzung. Leo's disintegration. The horror of watching a friend scrub the humanity out of his work because a machine told him he was inefficient. I tell her about the magazine editors who preferred the lie.

Sarah listens. She pours me tea. It's an act of love I didn't know I needed.

"you're not being hysterical, Robbie. They're colonizing the imagination," Sarah says softly. "That's what it is. Gentrification of the mind. They price you out of your own creativity."

"Yeah. That's why I want to make a space, a space where that can't happen. I want to start a new group," I say. "Here. After hours. But I have rules. No LLMs. Not just discouraged—blocked. I want to configure your router to sinkhole every request to OpenAI, Claude, Anthropic, all of them. If they try to connect, I want the browser to redirect to an HTML page that has a very judgmental cat purring at you."

"I love it," Sarah says. "When do we start?"

"Tonight," I say. "I can't go home. I need to bring my friend back."


We call it "The Drafty writer's group." It's not the best name I've ever invented, but I don't care.

The first meeting is small. Just me, Sarah, and a non-binary poet named Ash who writes sestinas about urban decay. We sit in a circle on mismatched chairs. I bring cookies—chocolate chip with sea salt, the contrast of sweet and sharp designed to wake up the palate.

I don't invite Leo yet. He isn't ready.

But word spreads. I don't use social media with an algorithm. I use the vast power of word of mouth. We print physical zines—little folded pieces of paper that smell of toner—and leave them in coffee shops, record stores, and community centers. Any place that will let us talk about the space.

Tired of the Slop? Come write poorly with us.

Recovering Prompters Welcome.

We don't want your polished draft. We want your mess.

By the third week, we have seven people.

One of them is Adam. Adam is a Trans Black man with a voice that is soft, almost a whisper. He sits in the corner for the first hour, just listening to the sound of pens scratching on paper and keys clattering, phone keyboards clicking, the rhythmic clacking of my mechanical keyboard.

"I... I used to use it too," Adam says suddenly during the break.

The room goes still. Not the hostile silence of Chad's group. A holding silence. A waiting silence.

"I wrote a story about my transition," Adam says. "And I showed it to a friend, and he said it was 'too angry.' So I put it in ChatGPT. I told it to make it 'more universal.' And it did. It took out the anger. It took out the specific smell of fear. It took out the things that made me, me. It made it... nice. And I hated it. But I couldn't stop. Because every time I tried to write the anger back in, I felt like I was doing it wrong. I felt like the machine knew better how to be human than I did."

"You weren't doing it wrong," I say. "You were doing it human. The machine averages out humanity until it's just a beige paste."

"I want to write the anger back," Adam whispers. "But I'm scared. I'm scared it's going to be bad."

"Let it be bad," Sarah says from the counter. "Be bad. Be furious. Be unintelligible. Just don't be artificial."

"We call that being a Recovering Prompter," I say. "There's no shame in it here. We know the pressure. We know the addiction of the easy fix. But we're here to do the hard work."

Adam picks up his pen. I hear the scratch, hesitant at first, then harder, faster, digging into the paper until I think it might tear. Encouragement and love flood the space.

There's still one person missing, though.

I send Leo an email and a text message, and yes, an actual letter.

Leo,

I made a writing garden. It's full of weeds. It's messy. Nothing is polished. It's not perfect.

We have cookies.

I miss your voice. Not the LLM one. The one that shakes the floor.

Come home. You are always welcome. I miss you. Come to one group, just one. We need you.

- Rob

He shows up twenty minutes late. I hear the bell clatter. I hear his heavy, hesitant steps. He smells of old sweat and that specific, acrid scent of anxiety.

"I didn't bring anything," he says, standing in the doorway. "I haven't written a word in two months, Rob. I'm empty. The well is dry."

"You're not empty," I say. "You're just quiet. Come sit."

He sits next to Adam.

"This is Leo," I tell the group. "He's the best writer I know. He's just forgetting how the instrument works."

"I'm not the best," Leo mutters. "I'm a fraud. I sent a story to a magazine that a robot wrote, and they paid me for it. I cashed the check, Rob. I spent the money."

"And you're here now," Adam says. "That's the work."

That night, Leo doesn't write. But he eats a cookie. And he listens. He listens to Ash read a poem that doesn't rhyme and has a stanza that is just a scream. He listens to Adam read a paragraph that is so angry it feels like heat radiating from a furnace.

And for the first time in months, I hear Leo breathe. A real breath. Deep into the diaphragm.


The resistance grows.

It isn't a war fought with DDOS attacks or angry tweets or furious posts about how AI sucks. It is fought with vulnerability. It is fought by local businesses putting our flyer in their windows. It is fought by people telling their friends, "Hey, there's a place where you don't have to be perfect."

Weeks bleed into a month. We start talking of putting on a live literature event. None of us have money, but none of us care. We want to do this.

As we plan, people come just to listen. Readers arrive just to enjoy writers being messy on the page. Other writers who are recovering prompt engineers come to our space. They all write poorly while they eat and drink, laughing at lines, exchanging hugs, feeling far less alone than when they walked in the door.

On the night of our first live reading, just before the event, three people walk in who smell of expensive leather jackets and throat lozenges.

"Is this the place?" a woman asks. Her voice is incredible—rich, trained, with perfect diction but a warmth that feels like velvet. It's a voice that knows how to hold a listener.

"Who's asking?" Sarah says.

"My name is Britany," the woman says. "I'm an audiobook narrator. This is David and Sam. We... we heard about what you're doing from the guy who runs the falafel stand down the street. We would love to read for anyone with disabilities or social anxiety or who just don't want to read work themselves."

"that's incredible!" I say, unable to hide my joyous shock, "but we can't pay you. We have a lot of cookies and anger but that doesn't translate into cash, and we didn't get enough donations to pay you all."

Brittany laughs. It is a musical sound, telling me everything is fine with the world. "Honey, I don't want your money. Do you know what I've been reading for the last six months? AI-generated litRPG novels. Thousands of pages of slop. I am dying of thirst. I am drowning in slop. I sit in my booth, and I read sentences that mean nothing, written by nobody, for an algorithm."

"We never seen a space for recovering prompt writers. This is such a cool idea. We want to lend our skills. We want to help. We heard there was grit here," David says. His voice is a deep baritone. "We heard there were jagged edges. We want to read them. For free. Just let us chew on something real. Let us narrate something that's growing into itself."

"We have plenty of jagged edges," I say. "Welcome to the writer's garden."


The Live Lit event was meant to be a small showcase. It turns into a riot of joy. We didn't shy away from the fact that recovering LLM prompters were going to be performing tonight.

Live readings by recovering humans. Show starts at eight PM.

The flyers catch the most attention. Sarah can't even fit everybody in the bookstore.

The alleyway is packed. I can feel the body heat, a wall of humidity and anticipation. It smells of rain, cheap beer, and electricity. There are people here from the neighborhood, people from the university, people who just sound tired of being sold things.

I walk up to the microphone. I tap my cane against the stand to gauge the distance. The hum of the PA system is dirty, crackling with interference. I love it. It sounds like potential.

The microphone hums and pops before me. None of us are audio technicians, and it shows. It's the most goddamn beautiful thing in the world. It's the sound of electricity trying to speak.

"Thank, uh, thank you all for coming," I say, and the microphone squeaks with a high-pitched feedback whine that makes a few people wince. I smile. I love that whine. It means the gain is too high. It means we are pushing the limits.

"You might notice our manifesto on the door," I say, my voice booming through the cheap PA system. "These writers... some of them have used LLMs before. Or what the venture capitalists call 'AI.' I think we can all agree, fuck AI, right?"

A cheer goes up. It's ragged and loud.

"But," I continue, lowering my voice, leaning in until my lips almost brush the metal grille, "I believe there should be a space for people that have been broken by the system. There will always be people that love whatever slop the LLM makes. There will always be those readers—and God help them, the editors—that prefer the LLM versions, the slop, the generated works that slide down your throat without you ever having to care about the artist behind it."

I pause. I listen to the silence. It is a deep, attentive silence. The kind of silence you find in a forest, not a packed room.

"Tonight, you'll hear grit," I say. "You'll hear mess. You'll hear first drafts. You'll hear bleeding onto the page. Nothing will be polished. Nothing will be sanitized. I admit, some things won't make a lick of fucking sense. But you all are here because you love the mess. You love artists. You love art. You love the chaos of a human mind trying to explain itself to another human mind."

I can feel the energy in the room shifting. It is a wave of warmth washing over me.

"We didn't tell you which writers are recovering LLM users," I say, my voice thickening with emotion. "Because they deserve better than to be hated. Psychological harm and warfare isn't easy to recover from. When a machine tells you your soul is inefficient, it takes a long time to stop believing it. But none of the pieces were generated by an LLM tonight. Not one word."

I grip the mic stand harder.

"So enjoy the recovering writers. Have some cookies—Sarah made them, and I can confirm by tactile inspection they are excellent. Be a part of the readers and listeners that listen to and appreciate art instead of just consume it like content pigs at a trough."

Laughter ripples through the room.

"If you don't want to support any of these writers for fear all of them used an LLM at some point, I understand. Trauma leaves scars. But there's information on how to donate to these writers in the front, and on the bookstore's website. Thank you for giving everyone a chance. Thank you for saying fuck you to the way others want you to be. Thank you for taking a chance on artists."

I take a breath. I can smell the ozone of the amp and the sugar of the cookies.

"Now... let's have some fucking fun!"

I step back. The applause is deafening. It isn't polite golf claps. It is stomping feet. It is whistles. It is the sound of a community knitting itself back together.

With every artist, the applause increases in volume. The audience participation is utterly stellar. There's whoops. There's cheers. There's exclamations of shock and wracked sobs and guttural laughter that spills out into the open space.

Nobody leaves. Everyone stays. Finally, it's time.

"and our final artist for tonight," Sarah announces, "is Leo."

The stomping grows louder.

I hear Leo walk to the mic. His footsteps are heavy. Solid. The footsteps of a man who remembers he has weight.

He clears his throat. It is a wet, nervous sound.

"This is... this is a first draft," Leo says. His voice shakes, just a little. "It has typos. I didn't run it through a spellchecker. I definitely didn't run it through Claude. And... for a while, I thought I couldn't write without it. I thought my voice was broken. But my friends told me broken things still make sound."

Laughter. Warm, supportive laughter.

"It's called The Taste of Love ."

He begins to read.

It's the best thing I've ever heard. It's about his momma cooking for him in her kitchen. There are stumbling blocks. Some metaphors swerve a little. Everything is Leo again. It's the best story I've heard from him in months, even with its imperfections.

Brittany, the audiobook narrator, is standing next to me. I hear her inhale sharply, a sound of pure appreciation as the scene unfolds before our mind's eye. His words grow in confidence as he dives deeper into his work. Everyone is engrossed.

"He's good," she whispers.

"He's real," I whisper back.

Leo continues reading. He describes the sound of a screen door slamming in the wind. He describes the texture of his grandmother's hands—"rough as unfinished pine." He doesn't try to hide his tears when he speaks of her clothes. He doesn't hold back. He doesn't try to change any part of his mess.

He builds a world out of friction.

When he finishes, there is a second of silence—that profound, heavy silence that happens when a truth has landed in the room.

Then, the world explodes.

The applause isn't just hands clapping. It's yells. It's people roaring. It's whistles. People stomp their feet. I swear, I think the sound rips a hole through time and space. The wooden floor vibrates so hard it travels up my cane and into my marrow.

The applause is a physical assault of love. I hear Leo sob, just once, a sharp intake of breath over the microphone, before he is drowned out by the noise.

I feel a hand on my shoulder. It is Sarah.

"We did it, Rob," she shouts over the roar. "We built this—this night—we did it!"

I lean back against the brick wall, feeling the rough texture snag against my jacket. I listen to the chaotic, unpolished, beautiful noise of human beings screaming for a story that has jagged edges.

The Tech Bros can keep their tokens. They can keep their scale. They can keep their LLM modules.

We have the love. We have the care. And we have the voices.

And that, I realize as the tears unapologetically spill onto my cheeks, is something they can never code.


If you enjoyed this story, support artists and art. But for real, if you enjoyed this story, you might enjoy Frindle by Andrew Clements.

Application Logging in Python: Recipes for Observability

Lobsters
www.dash0.com
2025-12-11 15:36:32
Comments...
Original Article

Once your Python application is running in production, it becomes a black box. You can no longer attach a debugger or inspect variables in real time. Your only insight into what it’s doing comes from the signals it emits: logs, metrics, traces , and other forms of telemetry.

Among these, logs provide the story your application tells about its own behavior. But that story can be told well or poorly. With basic, unstructured logging, it’s often a fragmented stream of text that adds confusion rather than clarity. But with structured logging, it becomes a queryable timeline of events that supports effective debugging and operational insight.

This guide is a practical walkthrough for building a robust logging system in Python. You’ll move beyond print-style statements and learn how to configure loggers using YAML, enrich log records with contextual data, and integrate logging with modern observability practices .

By the end, you’ll have the tools to turn your application from a black box into one that can be clearly observed and understood.

Let’s get started!

Understanding the logging module

Before you can transform your Python logs into a useful observability signal, you must understand the machinery you’re building upon.

Python’s built-in logging module is powerful, but to use it effectively, you must think about its components not as abstract concepts, but as solutions to real-world problems.

Every call to a logging method (like logger.info() ) creates a LogRecord object, which flows through a processing pipeline composed of four core components: the Logger , Handler , Formatter , and Filter .

Your entry point into this system is the Logger object, and there is a golden rule for acquiring one: always use logging.getLogger(<name>) .

Ignore examples that call level methods on the logging module (like logging.warning(...) ) or call getLogger() with no arguments. Both invoke the root logger , which should be avoided in anything beyond simple scripts.

The root logger has no namespace, so its output gets mixed with logs from third-party libraries. This makes it difficult to control log levels for specific parts of your code or trace a log message’s origin.

The correct pattern is to always create a module-specific logger, and the documentation specifically recommends using the special __name__ variable:

import logging

logger = logging.getLogger(__name__)

Using __name__ identifies your logger with the module’s fully qualified path, which naturally builds a hierarchical logger namespace . In Python, loggers form a tree where child loggers inherit settings (like level, handlers, and filters) from their parent unless explicitly overridden.

For example, a logger named my_app.services.billing is considered a child of my_app.services , which is a child of my_app . The ultimate ancestor of all loggers is the root logger.

This hierarchy allows you to define general logging policies for the entire application (e.g. my_app ), while customizing behavior for specific submodules without relying on the root logger. You’ll see the advantages of this model as we go deeper into this tutorial.
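
As a quick sketch of how that inheritance plays out (the logger names follow the my_app example above; the chosen levels are arbitrary illustrations):

import logging

# A general policy for the whole application...
logging.getLogger("my_app").setLevel(logging.INFO)

# ...with a more verbose override for one specific submodule.
logging.getLogger("my_app.services.billing").setLevel(logging.DEBUG)

# A logger with no explicit level inherits the effective level of its nearest configured ancestor.
logging.getLogger("my_app.services").getEffectiveLevel()  # 20, i.e. logging.INFO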

Once you have a Logger object handy, the first decision you’ll make is assigning a severity level to your message.

Controlling the signal-to-noise ratio with log levels

In production, logs have a cost: they consume disk, bandwidth, CPU, and money when ingested into observability platforms. Your goal is to maximize signal while minimizing noise, and log levels are your primary tool for this.

When you call methods like logger.info() or logger.warning() , you’re creating a LogRecord and assigning a severity level to it. The logger processes the record only if its level is equal to or higher than the logger’s configured level.

The standard levels in order of increasing severity are:

| Level    | Numeric value | Description                         |
| -------- | ------------- | ----------------------------------- |
| NOTSET   | 0             | Special: inherit from parent logger |
| DEBUG    | 10            | Detailed diagnostic information     |
| INFO     | 20            | Normal application events           |
| WARNING  | 30            | Potential problems                  |
| ERROR    | 40            | Failed operations                   |
| CRITICAL | 50            | Severe failures                     |

Each level has an associated numeric value that determines its priority. The lower the number, the less severe the message. Python uses these numeric values internally to decide whether a given log record should be processed.

NOTSET is a special level that causes a logger to inherit its parent’s level in the hierarchy all the way to the root logger, which defaults to WARNING . Every other level has a corresponding method on the logger:

logger.debug("a debug message")
logger.info("an info message")
logger.warning("a warning message")
logger.error("an error message")
logger.critical("a critical message")

When you set a logger’s level, you’re setting a numeric threshold where only records at that level or higher will be processed. For example, a logger set to WARNING (30) ignores DEBUG and INFO records, but allows WARNING , ERROR , and CRITICAL :

import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.WARNING)

Now that you understand severity levels, the next question is: where do these logs go? That brings us to Handlers.

Handlers define the destination

Handlers are responsible for dispatching LogRecord objects to their final destination . They determine where your logs go.

A key feature of the logging module is that a single logger can have multiple handlers attached, allowing you to simultaneously send the same log record to different places if you wish to.

Examining the default behavior

If no handler is explicitly configured on a logger, Python will check if the logger’s ancestors have any handlers (up to the root logger).

If no handlers are found on any ancestor, Python creates an implicit last-resort fallback handler attached to sys.stderr which always logs at the WARNING level.

You’ll see very basic output: no level, no logger name, no timestamps, just the message text itself.
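
For instance (a minimal sketch of the behavior just described; the logger name and message are placeholders rather than the article's own example):

import logging

# No handlers configured anywhere, so the last-resort handler takes over.
logging.getLogger("my_app").warning("Disk space is running low")
# Written to stderr as bare text:
# Disk space is running low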

This default behavior exists to ensure you see some log output even if you forget to configure logging.

For application developers, it's crucial to always configure your own handlers explicitly to avoid this behavior. Here are the most useful ones you should know about:

StreamHandler

The StreamHandler sends logs to a stream, such as sys.stdout or sys.stderr. This aligns with the Twelve-Factor App methodology, which dictates that an application should not concern itself with log file routing or storage. Instead, it should write its log stream to standard output and let the execution environment take over.

This pattern is standard in containerized environments like Docker and Kubernetes, where the container orchestrator automatically captures and forwards these streams. Likewise, on Linux VMs, Systemd captures all output from a service and directs it to the system journal, making it centrally accessible via journalctl .

import sys

stream_handler = logging.StreamHandler(stream=sys.stdout)
logger.addHandler(stream_handler)

FileHandler and its subclasses

If you manage your own servers or virtual machines, you may still write logs to files directly. The FileHandler helps you persist logs to the filesystem:

file_handler = logging.FileHandler("app.log")
logger.addHandler(file_handler)

To avoid unbounded log file growth, you can use rotation mechanisms provided by RotatingFileHandler or TimedRotatingFileHandler . You can also reach for logrotate , the standard Linux utility for log file rotation.
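
As a quick sketch of size-based rotation (the size limit and backup count here are arbitrary illustrative values, and logger is the module logger from the earlier snippets):

from logging.handlers import RotatingFileHandler

# Start a new file once app.log reaches roughly 5 MB, keeping the three most recent backups.
rotating_handler = RotatingFileHandler(
    "app.log", maxBytes=5 * 1024 * 1024, backupCount=3
)
logger.addHandler(rotating_handler)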

QueueHandler and QueueListener

The QueueHandler and QueueListener provide a powerful pattern for asynchronous, non-blocking logging in Python. This is crucial for high-performance applications, as it prevents slow I/O operations (like sending a log over the network) from delaying your main application thread.

It works by attaching a QueueHandler to your loggers. Instead of processing the log record itself, its only job is to quickly place the record onto a queue.Queue , which is a fast, in-memory operation.

The QueueListener runs in a separate background thread and continuously monitors the queue for new log records. When it finds one, it removes it from the queue and passes it to one or more "downstream" handlers for actual processing.

This decouples the creation of a log record from the work of processing and outputting it, making your application more responsive.

import logging
import logging.handlers
import queue

# The queue connecting the producer (QueueHandler) to the consumer (QueueListener).
log_queue = queue.Queue(-1)

# The downstream handler that does the actual (potentially slow) output work.
console_handler = logging.StreamHandler()

# The listener drains the queue on a background thread and hands records to the console handler.
listener = logging.handlers.QueueListener(log_queue, console_handler)
listener.start()

# Loggers only enqueue records, which is a fast, in-memory operation.
queue_handler = logging.handlers.QueueHandler(log_queue)
logger = logging.getLogger(__name__)
logger.addHandler(queue_handler)

logger.warning("This record is enqueued, then emitted by the listener thread")

# On shutdown, stop the listener so any buffered records are flushed.
listener.stop()

Other notable handlers

  • SysLogHandler can forward logs to a local or remote syslog server.
  • HTTPHandler can POST logs to remote HTTP endpoints.
  • NullHandler acts as a “do-nothing” placeholder which is useful for library authors.
  • MemoryHandler buffers logs in memory and only flushes them to a target handler when triggered (e.g. when an ERROR is logged). This creates a rolling buffer of debug context around failures without polluting the logs during normal operation (see the sketch after this list).
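
Here is a minimal sketch of that MemoryHandler pattern (the capacity and logger name are illustrative choices):

import logging
from logging.handlers import MemoryHandler

# The real destination for buffered records.
target = logging.StreamHandler()

# Hold up to 200 records in memory and flush them all to the target
# as soon as a record at ERROR or above arrives (or the buffer fills up).
memory_handler = MemoryHandler(capacity=200, flushLevel=logging.ERROR, target=target)

logger = logging.getLogger("my_app")
logger.setLevel(logging.DEBUG)
logger.addHandler(memory_handler)

logger.debug("held in the buffer for now")
logger.error("something broke")  # triggers a flush, so the buffered debug record is emitted too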

A quick note on handler levels

Each configured handler can have its own level that is independent of the logger’s level. Both levels work together to control what gets emitted:

  • The logger’s level is the first filter. If a log record’s level is lower than the logger’s level, it is never passed to any handlers.
  • The handler’s level applies second. If a handler receives a log record, it will only process it if the record’s level meets or exceeds the handler’s level.

Here’s an example:

import logging

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)

file_handler = logging.FileHandler("errors.log")
file_handler.setLevel(logging.ERROR)

logger.addHandler(console_handler)
logger.addHandler(file_handler)

In this setup:

  • All records DEBUG and above are created.
  • The console_handler only processes INFO , WARNING , ERROR , and CRITICAL records.
  • The file_handler only receives ERROR and CRITICAL .

With destinations now defined via handlers, and filtering controlled by levels, the next piece of the puzzle is how your logs are formatted. This brings us to Formatters .

Formatters define the output (and why it should be JSON)

A formatter takes the LogRecord object and serializes it into its final output format. While Python’s logging.Formatter produces simple, human-readable strings by default, modern observability practices favor structured formats like JSON , which are easier to parse, query, and correlate in log management platforms .

To implement JSON logging, you can either follow the official structured logging guide , or use a drop-in library python-json-logger as follows:

pip install python-json-logger

import logging

from pythonjsonlogger import jsonlogger

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

handler = logging.StreamHandler()
formatter = jsonlogger.JsonFormatter("%(asctime)s %(name)s %(levelname)s %(message)s")
handler.setFormatter(formatter)

logger.addHandler(handler)
logger.info("an info message")

This produces structured JSON output like:

{
  "asctime": "2025-06-18 13:53:24,986",
  "name": "__main__",
  "levelname": "INFO",
  "message": "an info message"
}

To customize the output, you can reference any valid LogRecord attributes in the format string. You can also use a denylist of attributes like this:

formatter = jsonlogger.JsonFormatter(reserved_attrs=["pathname", "funcName"])

This yields:

{
  "message": "an info message",
  "msg": "an info message",
  "created": 1750251363.462218,
  "relativeCreated": 21.619407,
  "thread": 140003799217984,
  "threadName": "MainThread",
  "processName": "MainProcess"
}

Another useful customization is using rename_fields to map default field names to your own schema:

formatter = jsonlogger.JsonFormatter(
    "%(asctime)s %(name)s %(levelname)s %(message)s",
    rename_fields={"levelname": "level", "asctime": "time"},
)

This renames the levelname and asctime fields:

{
  "time": "2025-06-18 14:06:40,333",
  "name": "__main__",
  "level": "INFO",
  "message": "an info message"
}

Now that you’ve seen how formatters shape the final output of a log message, there’s one more component that completes the logging pipeline: Filters .

Filters are for dynamic control and log enrichment

A filter is a powerful object that you can attach to a logger or a handler to provide fine-grained control over which log records are processed , and to optionally enrich them with additional attributes.

Before a log record reaches a handler, any filters attached to the logger and to the handler are applied. Each filter can accept or reject the record. If any filter rejects it, the record is silently dropped, either globally (for filters attached to the logger) or just for that destination (for filters attached to the handler).

In production, filters are typically used for two purposes:

  • Reducing noise and cost by suppressing high-volume, low-value logs.
  • Injecting dynamic context into log records before they’re formatted.

1. Filtering logs at the source

The most common use case for a filter is to suppress unnecessary logs from known noisy paths. For example, to ignore logs containing /health you can use:

import logging

class NoHealthChecksFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        # Drop any record whose message mentions the health-check endpoint.
        return "/health" not in record.getMessage()

logger = logging.getLogger("app")
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler()
handler.addFilter(NoHealthChecksFilter())
logger.addHandler(handler)

logger.info("GET /api request succeeded")     # emitted
logger.info("GET /health request succeeded")  # dropped by the filter

When attached to a handler (as above), this filter only affects that destination. To filter logs globally , attach it to the logger instead:

logger.addFilter(NoHealthChecksFilter())

2. Enriching logs with context

Filters can also mutate log records in-place . Since they have access to the full LogRecord , they can inject dynamic context (like request IDs, user info, or trace IDs) before the formatter runs:

import uuid

class ContextFilter(logging.Filter):
    def filter(self, record):
        # Attach a unique ID to the record, then let it through.
        record.log_id = str(uuid.uuid4())
        return True

You’ll see some more interesting examples of this in action later in this article.

Centralizing logging configuration with the dictConfig pattern

You’ve now seen the core components of the logging module: Loggers to create log records, Handlers to set the destination, Formatters to define the structure, and Filters for log filtering and enrichment.

So far, we’ve wired these together using logger.addHandler() and similar calls. But the best practice is to centralize your logging configuration with Python’s built-in logging.config.dictConfig() function. It accepts a dictionary (typically loaded from a YAML file) and configures the entire logging system declaratively.

Here’s an example config.yaml :

config.yaml

version: 1
disable_existing_loggers: False

formatters:
  json_formatter:
    (): pythonjsonlogger.jsonlogger.JsonFormatter
    format: "%(asctime)s %(name)s %(levelname)s %(message)s"

handlers:
  console:
    class: logging.StreamHandler
    formatter: json_formatter

loggers:
  my_app:
    level: DEBUG
    handlers: [console]
    propagate: False

In your application’s entry point, you can load and apply this configuration. Note that PyYAML and python-json-logger must be installed for this to work:

my_app/__init__.py

import logging.config

import yaml

with open("config.yaml", "r") as f:
    config = yaml.safe_load(f.read())

logging.config.dictConfig(config)

logger = logging.getLogger(__name__)

logger.debug("This is a debug message.")
logger.info("Application starting up with configuration from YAML.")
logger.warning("This is a warning.")

This approach makes it easy to swap configs between environments (e.g. config.dev.yaml, config.prod.yaml) based on an environment variable or CLI flag.
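As a rough sketch of that idea (the APP_LOG_CONFIG variable name here is just an illustration, not something the logging module defines):

import logging.config
import os

import yaml

# Pick the config file from an environment variable, falling back to the dev config
config_file = os.getenv("APP_LOG_CONFIG", "config.dev.yaml")

with open(config_file, "r") as f:
    logging.config.dictConfig(yaml.safe_load(f))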

The propagate: False setting helps prevent duplicate logs. By default, after a logger handles a log record, it propagates the record up to its parent logger. This continues up the hierarchy until it reaches the root logger.

Setting propagate: False stops this chain reaction. It tells the my_app logger: "You've handled this log. Do not pass it to any ancestor loggers." This ensures each log message is processed exactly once.
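You can see the effect with a small, self-contained sketch (assuming the root logger has its own handler, as it does after calling logging.basicConfig()):

import logging

logging.basicConfig(level=logging.INFO)  # attaches a handler to the root logger

app_logger = logging.getLogger("my_app")
app_logger.addHandler(logging.StreamHandler())

app_logger.info("handled twice")   # emitted by my_app's handler, then again by root's

app_logger.propagate = False
app_logger.info("handled once")    # emitted only by my_app's handler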

With a decoupled logging configuration in place, the mechanics are solved. Next, we’ll focus on enriching the content of your logs by injecting rich contextual attributes.

Adding contextual attributes to Python logs

In production, applications handle many requests concurrently, resulting in interleaved logs that are hard to make sense of without additional structure.

For example, a log like:

{
  "time": "2025-06-18 14:06:40,333",
  ...
  "message": "Failed to update record in database"
}

tells you almost nothing about what happened. You’re left wondering: Which record? Which request? What led to this?

To make your logs a useful signal for observability, they must be rich, structured events containing enough detail to answer operational questions.

This is the goal of contextual logging : enriching every log message with consistent metadata relevant to the current operation.

The simplest way to add context is with the extra parameter. Just ensure your keys don’t clash with built-in LogRecord attributes :

logger.info(
    "Updating user profile", extra={"user_id": "usr-1234"}
)

This produces:

output

{
  "time": "2025-06-18 15:43:21,304",
  "name": "__main__",
  "level": "INFO",
  "message": "Updating user profile",
  "user_id": "usr-1234"
}

Using the extra parameter is fine for including a one-off attribute in a specific log message. But when an attribute needs to appear consistently across many log statements, passing extra manually quickly becomes tedious.

A more powerful approach is to use a custom filter to inject context automatically into every log record. This keeps your log calls clean while ensuring that every message is enriched with the right metadata.

You can build this up in two stages: first by adding static, global context, and then by adding dynamic, per-request context. Let’s start with the global case.

1. Adding global context

A filter can be used to attach application-wide properties, like the hostname and process ID, to all logs. This establishes a consistent baseline of context for all log messages.

Suppose you have a log_context.py module with the following filter:

log_context.py

import logging
import os
import socket

class ContextFilter(logging.Filter):
    def __init__(self, name=""):
        super().__init__(name)
        self.hostname = socket.gethostname()
        self.process_id = os.getpid()

    def filter(self, record):
        record.hostname = self.hostname
        record.process_id = self.process_id
        return True

You can then register and attach this filter in your YAML configuration as follows:

config.yaml

version: 1
disable_existing_loggers: false

filters:
  context_filter:
    (): log_context.ContextFilter

formatters:
  json_formatter:
    (): pythonjsonlogger.jsonlogger.JsonFormatter
    format: "%(asctime)s %(name)s %(levelname)s %(message)s"

handlers:
  console:
    class: logging.StreamHandler
    formatter: json_formatter
    filters: [context_filter]

loggers:
  my_app:
    level: DEBUG
    handlers: [console]
    propagate: false

With this setup, every log record sent to the console handler will include hostname and process_id , regardless of where the log call originated:

{
  "time": "2025-06-18 15:59:38,780",
  "level": "INFO",
  "message": "An info message",
  "hostname": "[...]",
  "process_id": "[...]"
}

This kind of global context is simple to implement and immediately improves the quality and traceability of logs in distributed environments. In an OpenTelemetry setup, you should use Resource Attributes to tag telemetry data with service-specific metadata, so you can pinpoint application issues down to a specific host, container, or Kubernetes deployment.
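As a rough sketch (assuming the opentelemetry-sdk package; the attribute values are illustrative), resource attributes are declared once and attached to everything the SDK emits:

from opentelemetry.sdk.resources import Resource

# service.name and friends are standard OpenTelemetry semantic-convention keys;
# the values below are placeholders for your own service metadata.
resource = Resource.create({
    "service.name": "my-app",
    "service.version": "1.4.2",
    "deployment.environment": "production",
})

# The resource is then passed to whichever providers you configure,
# e.g. TracerProvider(resource=resource) or LoggerProvider(resource=resource).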

2. Adding dynamic, per-request context

To add context that changes with each request (like a request_id ), you can enhance the filter using Python’s contextvars .

This module allows you to store data that is safely isolated to the current execution context, and avoids leakage across concurrent or asynchronous tasks.

Here’s an enhanced log_context.py that supports both global and dynamic context:

log_context.py

import contextvars
import logging
import os
import socket
from contextlib import contextmanager

_log_context = contextvars.ContextVar("log_context", default={})

class ContextFilter(logging.Filter):
    def __init__(self, name=""):
        super().__init__(name)
        self.hostname = socket.gethostname()
        self.process_id = os.getpid()

    def filter(self, record):
        # Static, application-wide context
        record.hostname = self.hostname
        record.process_id = self.process_id

        # Dynamic, per-request context stored in the ContextVar
        context = _log_context.get()
        for key, value in context.items():
            setattr(record, key, value)

        return True

@contextmanager
def add_to_log_context(**kwargs):
    # Merge the new keys into the current context for the duration of the block
    current_context = _log_context.get()
    new_context = {**current_context, **kwargs}
    token = _log_context.set(new_context)
    try:
        yield
    finally:
        _log_context.reset(token)

This pattern provides a clean and powerful way to automatically enrich log records. The ContextFilter injects both static, app-wide fields and dynamic, per-request context into every log message. The dynamic context is managed by a ContextVar , which safely tracks per-request data even in asynchronous or multithreaded code.

The add_to_log_context helper lets you temporarily add context inside a with block. It guarantees cleanup after the block exits, ensuring your logs remain consistent and isolated per request.

In your application code, you can use it like this:

main.py

import logging
import uuid

from fastapi import FastAPI, Request

from log_context import add_to_log_context

logger = logging.getLogger(__name__)
app = FastAPI()

@app.middleware("http")
async def add_request_context(request: Request, call_next):
    request_id = request.headers.get("X-Request-ID", str(uuid.uuid4()))
    with add_to_log_context(request_id=request_id):
        response = await call_next(request)
    return response

@app.get("/users/{user_id}")
async def get_user(user_id: str):
    with add_to_log_context(user_id=user_id):
        logger.info("User profile request received.")
        return {"user_id": user_id}

This creates a layered logging context:

  • The middleware adds a request_id that applies to the entire request lifecycle.
  • The route handler adds user_id, scoped to that endpoint’s execution.

Any log message inside those blocks automatically includes both layers of context.

For example, requesting the endpoint:

curl http://127.0.0.1:8000/users/1234

produces a log entry like:

{
  "asctime": "2025-06-19 11:45:33,360",
  "message": "User profile request received.",
  "hostname": "[...]",
  "process_id": "[...]",
  "request_id": "e36d6a48-5098-4bca-a610-c33e62342a9d",
  "user_id": "1234"
}

With a system for enriching your logs with both static and dynamic context, you now have a rich view of your application’s happy path.

Next, we’ll make sure errors are captured just as richly with full exception context and surrounding debug information, giving you a complete “black box recording” for any failure.

Logging Python errors and exceptions

The goal of exception logging isn’t just to note that an error occurred — it’s to capture enough context to understand why it happened. That includes the full stack trace as well as any structured context you’ve already added.

Handled exceptions

When you anticipate an error and catch it with a try...except block, you’re dealing with a handled exception. The best practice in these cases is to use logger.exception() inside the except block:

try:
    1 / 0
except ZeroDivisionError:
    logger.exception("An unexpected error occurred")

The logger.exception() method automatically logs:

  • The exception type and message
  • The full traceback (via exc_info )
  • The record at the ERROR level by default

{
  "time": "2025-06-19 12:13:52,544",
  "level": "ERROR",
  "message": "an unexpected error occurred",
  "exc_info": "Traceback (most recent call last):\n File \"/home/dash0/demo/python-logging/main.py\", line 21, in <module>\n 1 / 0\n ~~^~~\nZeroDivisionError: division by zero"
}

If you want to log the exception at a different level (such as CRITICAL), you can use the corresponding level method and explicitly pass exc_info=True:


logger.critical("an unexpected error occurred", exc_info=True)

Uncaught exceptions

An unhandled exception is one that escapes all try...except blocks and causes the application to crash. These are critical events, and logging them is essential for post-mortem debugging.

You can achieve this by setting a custom hook for sys.excepthook , which Python calls right before the program terminates due to an unhandled exception:

import sys

def handle_uncaught_exception(exc_type, exc_value, exc_traceback):
    logger.critical(
        "uncaught exception, application will terminate.",
        exc_info=(exc_type, exc_value, exc_traceback),
    )

sys.excepthook = handle_uncaught_exception

This setup ensures that any uncaught exceptions are logged at the CRITICAL level before the process exits.

Structuring Python tracebacks

In the exception logs above, you'll notice that the Python traceback is embedded as a single, multi-line string even with JSON logging enabled. To add some structure here, you can create a custom formatter subclass that overrides how exceptions are handled:

custom_formatter.py

import traceback

from pythonjsonlogger import jsonlogger

class StructuredExceptionJsonFormatter(jsonlogger.JsonFormatter):
    def add_fields(self, log_record, record, message_dict):
        super().add_fields(log_record, record, message_dict)

        if record.exc_info:
            exc_type, exc_value, exc_traceback = record.exc_info
            log_record['exception'] = {
                'exc_type': exc_type.__name__,
                'exc_value': str(exc_value),
                'traceback': traceback.format_exception(exc_type, exc_value, exc_traceback),
            }

        # Drop the default string-formatted exception fields
        log_record.pop('exc_info', None)
        log_record.pop('exc_text', None)

Then configure it as follows:

config.yaml

formatters:
  json_formatter:
    (): custom_formatter.StructuredExceptionJsonFormatter
    format: "%(asctime)s %(name)s %(levelname)s %(message)s"

You'll see the following output now:

output

{
  "time": "2025-07-02 10:33:21,464",
  "message": "An unexpected error occurred",
  "exception": {
    "exc_type": "NameError",
    "exc_value": "name 'risky_operation' is not defined",
    "traceback": [
      "Traceback (most recent call last):\n",
      " File \"/home/ayo/dev/dash0/demo/python-logging/main.py\", line 102, in <module>\n risky_operation()\n ^^^^^^^^^^^^^^^\n",
      "NameError: name 'risky_operation' is not defined\n"
    ]
  }
}

If you'd like a fully serialized Python traceback, you can check out an alternative library called structlog, which provides a fully structured exception key that looks like this:

{
  ...
  "exc_value": "name 'risky_operation' is not defined",
  "filename": "/home/ayo/dev/dash0/python-logging/main.py",
  ...
}

Setting up request and error logging

In web APIs, logs that capture the request lifecycle provide the foundation for troubleshooting. A standard pattern, known as request logging or “access logging”, records two events:

  • When the request is received, capturing the intent.
  • When the response is sent, capturing the outcome and duration.

This creates a clear audit trail for every transaction. When combined with contextual logging, it becomes a powerful observability tool.

To demonstrate this, let’s implement request and error logging in a FastAPI application:

# Continues the main.py example above (the logging config, `logger`, `app`,
# and add_to_log_context are assumed to be in place); also needs `import time`.
@app.middleware("http")
async def request_logging_middleware(request: Request, call_next):
    request_id = request.headers.get("X-Request-ID", str(uuid.uuid4()))
    request.state.request_id = request_id

    with add_to_log_context(request_id=request_id):
        client_ip = request.headers.get("X-Forwarded-For") or request.client.host

        logger.info(
            "incoming %s request to %s",
            request.method,
            request.url.path,
            extra={
                "client_ip": client_ip,
                "method": request.method,
                "path": request.url.path,
                "user_agent": request.headers.get("user-agent"),
            },
        )

        start_time = time.monotonic()
        response = await call_next(request)
        duration_ms = (time.monotonic() - start_time) * 1000

        if response.status_code >= 500:
            log_level = logging.ERROR
        elif response.status_code >= 400:
            log_level = logging.WARNING
        else:
            log_level = logging.INFO

        logger.log(
            log_level,
            "%s request to %s completed with status %s",
            request.method,
            request.url.path,
            response.status_code,
            extra={
                "status_code": response.status_code,
                "duration_ms": duration_ms,
            },
        )

        return response

This middleware logs an entry message as soon as the request is received, then logs the outcome after the response is generated. The log level is dynamically chosen based on the status code to reflect the severity of the outcome.

FastAPI’s HTTPException is handled automatically by this setup since it produces proper HTTP responses with appropriate status codes, which are reflected in the final log message.

If you’re using this pattern, you’ll likely want to disable FastAPI’s default access logs to avoid redundancy:
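As a rough sketch, assuming you serve the app with Uvicorn (FastAPI's usual server):

import uvicorn

if __name__ == "__main__":
    # Option 1: start the server with Uvicorn's access log disabled
    uvicorn.run("main:app", host="127.0.0.1", port=8000, access_log=False)

# Option 2: keep your usual launch command and silence the access logger
# from your logging configuration instead, for example:
#   logging.getLogger("uvicorn.access").disabled = True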

Assuming you have the following route:

from fastapi import HTTPException

users = {"abc": "John Doe"}

@app.get("/users/{user_id}")
async def get_user(user_id: str):
    with add_to_log_context(user_id=user_id):
        logger.info("user profile request received.")

        if user_id not in users:
            raise HTTPException(status_code=404, detail="user not found")

        return {"name": users[user_id]}

You’ll see the following logs for each request:


{"time": "[...]", "message": "incoming GET request to /users/abc", "method": "GET", ...}

{"time": "[...]", "message": "GET request to /users/abc completed with status 200", "status_code": 200, ...}

To handle other kinds of exceptions, you’ll need to create an exception handler and add your logging there:

from fastapi.responses import JSONResponse

@app.exception_handler(Exception)
async def unhandled_exception_handler(request: Request, exc: Exception):
    request_id = getattr(request.state, "request_id", "")
    logger.exception(
        "unhandled exception during request processing",
        extra={"request_id": request_id},
    )
    return JSONResponse(
        status_code=500,
        content={"detail": "An internal server error occurred."},
    )
With structured logs, dynamic context, and robust error tracking in place, your application is now emitting high-quality logs.

Centralizing your Python logs

Once your Python application is producing structured and well-contextualized logs, the final step is to move them from isolated files or console streams into a centralized observability platform.

Centralizing your logs transforms them from a simple diagnostic record into a powerful, queryable dataset that can be searched, filtered, and analyzed at scale.

More importantly, it allows you to unlock a unified view of your system’s behavior by correlating logs with other telemetry signals like traces and metrics.

A modern observability platform like Dash0 is designed to ingest structured logs and automatically link them with distributed traces via shared correlation IDs.

If you’re already using OpenTelemetry in your application, this integration becomes even more streamlined. This guide on instrumenting Python applications with OpenTelemetry provides a practical walkthrough.

Once integrated, your application’s logs will appear alongside traces and metrics in a unified interface, giving you full visibility into every request, across every service.

Final thoughts

Effective logging is more than just printing messages. It’s about building a structured, queryable record of everything your application does.

By combining Python’s built-in logging module with modern practices like JSON formatting, contextual enrichment, error tracking, and centralized aggregation, you’ve laid the foundation for making your logs actually useful for debugging issues.

Whether you’re debugging a failed request, tracking down performance issues, or analyzing trends across services, these high-quality logs are designed to turn questions into answers.

And with tools like OpenTelemetry, you can connect these detailed logs to distributed traces for rapid and effective root cause analysis across your entire system.

Don’t forget to check out the logging documentation and cookbook for a deeper understanding of the module’s API and a wealth of practical recipes for solving other real-world challenges.

Thanks for reading!

Google’s code review practices

Lobsters
google.github.io
2025-12-11 15:31:09
Comments...
Original Article

Google Engineering Practices Documentation

Google has many generalized engineering practices that cover all languages and all projects. These documents represent our collective experience of various best practices that we have developed over time. It is possible that open source projects or other organizations would benefit from this knowledge, so we work to make it available publicly when possible.

Currently this contains the following documents:

Terminology

There is some Google-internal terminology used in some of these documents, which we clarify here for external readers:

  • CL : Stands for “changelist”, which means one self-contained change that has been submitted to version control or which is undergoing code review. Other organizations often call this a “change”, “patch”, or “pull-request”.
  • LGTM : Means “Looks Good to Me”. It is what a code reviewer says when approving a CL.

License

The documents in this project are licensed under the CC-By 3.0 License , which encourages you to share these documents. See https://creativecommons.org/licenses/by/3.0/ for more details.


iPhone Typos? It's Not Just You – The iOS Keyboard Is Broken [video]

Hacker News
www.youtube.com
2025-12-11 15:25:43
Comments...

Launch HN: BrowserBook (YC F24) – IDE for deterministic browser automation

Hacker News
news.ycombinator.com
2025-12-11 15:18:51
Comments...
Original Article

Hey HN! We’re Chris, Jorrie, and Evan of BrowserBook, an IDE for writing and debugging Playwright-based web automations. You can download it as a Mac app here: https://browserbook.com , and there’s a demo video at https://www.youtube.com/watch?v=ODGJBCNqGUI .

Why we built this: When we were going through YC, we were a company that automated back-office healthcare workflows. Since the interoperability ecosystem in healthcare is so fragmented, we started using browser agents to automate EMRs, practice management software, and payment portals directly through the web. When we did, we ran into a ton of problems:

Speed: High latency on LLM calls vs. a scripting approach

Cost: We burned through tokens with all the context we needed to make the automations reasonably accurate

Reliability: Even with detailed instructions, context, and tools, agents tended to drift on multi-step tasks in unpredictable ways

Debuggability: When drift did occur, we were essentially playing whack-a-mole in our prompt and re-running the whole automation to debug issues (see above: speed and cost issues made this quite painful)

More and more we were just giving our agent scripts to execute. Eventually, we came to the conclusion that scripting is a better approach to web automation for these sorts of use cases. But scripting was also too painful, so we set out to solve those problems with BrowserBook.

Under the hood, it runs a standalone TypeScript REPL wired directly into an inline browser instance, with built-in tooling to make script development quick and easy. This includes:

- A fully interactive browser window directly in the IDE so you can run your code without context switching

- A Jupyter-notebook-style environment - the idea here is you can write portions of your automation in individual cells and run them individually (and quickly reset manually in the browser), instead of having to rerun the whole thing every time

- An AI coding assistant which uses the DOM context of the current page to write automation logic, which helps avoid digging around for selectors

- Helper functions for taking screenshots, data extraction, and managed authentication for auth-required workflows.

Once you’ve created your automation, you can run it directly in the application or in our hosted environment via API, so you can use it in external apps or agentic workflows.

At its core, BrowserBook is an Electron app, so we can run a Chrome instance directly in the app without the need for cloud-hosted browsers. For API runs, we use hosted browser infra via Kernel (which is a fantastic product, btw), relying on their bot anti-detection capabilities (stealth mode, proxies, etc.).

Scripted automation can be unpopular because scripts are inherently brittle; unlike “traditional” software development, your code is deployed in an environment you don’t control - someone else’s website. With BrowserBook, we’re trying to “embrace the suck”, and acknowledge this “offensive programming” environment.

We’ve designed from the ground up to assume scripts will break, and aim to provide the tools that make building and maintaining them easier. In the future, our plan is to leverage AI where it has shown its strength already - writing code - to minimize downtime and quickly repair broken scripts as the deployed environment changes.

Browser agents promised to solve this by handing the reins to an LLM which can handle inconsistency and ambiguity. While we think there are some applications where browser agents can be genuinely helpful, tasks that need to be done reliably and repeatedly are not among them.

We’d love for you to try it out! You can download BrowserBook from our website here: https://browserbook.com (only available for Mac so far, sorry!) And of course, we’d appreciate any feedback and comments you have!

New ConsentFix attack hijacks Microsoft accounts via Azure CLI

Bleeping Computer
www.bleepingcomputer.com
2025-12-11 15:10:49
A new variation of the ClickFix attack dubbed 'ConsentFix' abuses the Azure CLI OAuth app to hijack Microsoft accounts without the need for a password or to bypass multi-factor authentication (MFA) verifications. [...]...
Original Article

Microsoft

A new variation of the ClickFix attack dubbed 'ConsentFix' abuses the Azure CLI OAuth app to hijack Microsoft accounts without the need for a password or to bypass multi-factor authentication (MFA) verifications.

A ClickFix attack is a social engineering technique that attempts to trick users into running commands on their computer to install malware or steal data. They commonly use fake instructions that pretend to fix an error or verify that they are human and not a bot.

This new ConsentFix variant was discovered by cybersecurity firm Push Security, which explains that the ConsentFix technique steals OAuth 2.0 authorization codes that can be used to obtain an Azure CLI access token.

Azure CLI is a Microsoft command-line application that uses an OAuth flow to let users authenticate and manage Azure and Microsoft 365 resources from their local machine. In this campaign, attackers trick victims into completing that Azure CLI OAuth flow and then steal the resulting authorization code, which they exchange for full account access without needing the user's password or MFA.

The ConsentFix attack

A ConsentFix attack starts with the victim landing on a compromised, legitimate website that ranks high on Google Search results for specific terms.

The visitor is shown a fake Cloudflare Turnstile CAPTCHA widget that asks for a valid business email address. The attacker's script checks this address against a list of intended targets, filtering out bots, analysts, and anyone else not on the target list.

Victim prompted to enter their email address
Source: Push Security

Users who pass this check are shown a page that resembles ClickFix interaction patterns, providing the victim with instructions to verify they are human.

These instructions are to click the 'Sign in' button on the page, which opens a legitimate Microsoft URL in a new tab.

The ClickFix-styled page that steals the URL with the code
Source: Push Security

However, this is not your typical Microsoft login prompt, but rather an Azure login page used to generate an Azure CLI OAuth access code.

Microsoft Azure CLI login page
Source: BleepingComputer

If the user is already logged into the Microsoft account, they only need to select their account; otherwise, they authenticate normally on Microsoft's real login page.

Once this happens, Microsoft redirects them to a localhost page, and the browser address bar now displays a URL containing an Azure CLI OAuth authorization code tied to the user's account.

The phishing process completes when the user pastes the URL into the malicious page, as per the provided instructions, granting the attacker access to the Microsoft account via the Azure CLI OAuth app.

"Once the steps are completed, the victim has effectively granted the attacker access to their Microsoft account via Azure CLI," explains Push .

"At this point, the attacker has effective control of the victim's Microsoft account, but without ever needing to phish a password or pass an MFA check."

"In fact, if the user was already logged in to their Microsoft account (i.e., they had an active session), no login is required at all."

Push says the attack triggers only once per victim IP address, so even if valid targets return to the same phishing page, they will not get the Cloudflare Turnstile check.

The researchers suggest that defenders look for unusual Azure CLI login activity, such as logins from new IP addresses, and monitor for legacy Graph scopes, which attackers intentionally leverage to evade detection.


Lander Opportunity

hellgate
hellgatenyc.com
2025-12-11 15:07:19
The comptroller launches his bid to knock off Dan Goldman. And more links for your Thursday....
Original Article
Lander Opportunity
Brad Lander at the launch party of his run for Congress on Wednesday. (Hell Gate)

Morning Spew

Scott's Picks:


AI is accelerating cyberattacks. Is your network prepared?

Bleeping Computer
www.bleepingcomputer.com
2025-12-11 15:05:15
AI-driven attacks now automate reconnaissance, generate malware variants, and evade detection at a speed that overwhelms traditional defenses. Corelight explains how network detection and response (NDR) provides the visibility and behavioral insights SOC teams need to spot and stop these fast-moving...
Original Article

Lava lamp with warning

Cyber security is under intense scrutiny these days, especially as adversarial AI-based attackers such as Scattered Spider use a variety of living-off-the-land methods to spread faster, amplify their impact, and disguise their operations. This means that defending today’s networks requires a quicker and more sophisticated, in-depth response.

Offensive AI is beginning to thrive: Google’s Threat Intelligence group has tracked new and maturing AI-fueled attack methods including AI tools that can bypass safety guardrails, generate malicious scripts, and automatically evade detection. Anthropic has observed what it calls the first known use of AI-based orchestration to stitch together different pieces of malware to perform network reconnaissance, discover vulnerabilities, move laterally across a target network, and harvest data.

This AI orchestration can happen at a speed and scale that could easily overwhelm manual detection and remediation methods. These are new attacks in every sense of the word and use the automation and intelligence of a machine learning algorithm to subvert digital defenses.

These attacks are just the beginning of how AI can be used to bypass traditional security protections. While the history of credential compromise goes back decades, what is new is the level of scale that can be accomplished with just a few AI prompts and how that can leverage AI-powered harvesting to collect a huge amount of stolen data.

This is just one way bad actors use AI. This Cloud Security Alliance report from June 2025 lists more than 70 different ways that autonomous AI-based agents can be used to attack enterprise systems, and demonstrates how these agents significantly expand the attack surface beyond traditional trust boundaries and security practices.

Truly, nothing is safe any longer, and we are firmly now in the era of zero trust. Since the term was first coined by John Kindervag in 2009 when he was at Forrester Research, it has blossomed into an almost universal set of circumstances. The difference with today’s networks is that SOC analysts also can’t take anything for granted, and have to become more effective at finding and stopping attacks no matter where they originate.

Why NDR matters against AI-powered attacks

As organizations search for better ways to defend new AI threats, they are turning to network visibility to understand how these techniques can be useful as a defensive mechanism.

Unlike legacy solutions that focus on blocking known traffic signatures or rely on manual investigation, network detection & response (NDR) systems continuously monitor and analyze network data, provide real-time insight to detect fast-moving, deceptive AI-based threats and automatically identify abnormal data transfers and network traffic patterns. These systems augment simple network visibility with real-time analytics.

Corelight Investigator Dashboard

Today’s legacy defensive systems were designed to focus on these known threats in the era before AI could create thousands of customized malware variants.

This could be one reason why searching for “Network Detection and Response solutions” is on the increase, according to both Google Trends and Gartner’s Magic Quadrant™ for Network Detection and Response report.

Here is how today's NDR systems can counter AI-driven attacks:

— First, identifying and counteracting AI-fueled reconnaissance campaigns and other fast-moving polymorphic attacks. What these attacks have in common is the ability to use automated techniques to test for unprotected points of entry or for unpatched vulnerabilities. It is as if a burglar were able to quickly jiggle a series of hundreds of locked doors to find the one that has been accidentally left open. An NDR solution can keep pace with higher traffic volumes generated by these automated systems—and is able to process all this data in a timely fashion—which is critical to finding the intruder hidden amongst normal activity.

NDR systems employ real-time monitoring to inspect all network traffic. This enables them to detect threats and reconstruct timelines and components of various types of attacks. As an example, these systems often include various automation and AI/ML methods to reveal things such as lateral network threat movement or other anomalous behavior that is a sign of a bad actor’s evasive approaches. An NDR solution should also be able to separate false positives from true positive alerts while providing the surrounding context and network-based implications to investigate true threats.

— Second, to summarize and analyze what is happening across an enterprise’s networks and cloud estates. This could be calculating the ratio of encrypted to unencrypted traffic, for example, and comparing it with a historical baseline, or observing that a network router has never used SSH to connect to the internet previously and is now using this protocol. Or, it could identify connections to new services or IP addresses. Insights like these are helpful for security teams, giving them better context during their investigations and helping them understand how network traffic changes over time.

— Third, be able to save these patterns to some storage medium for future inspection and analysis. The system can recognize and extract individual files and analyze them for further action, such as to set up specific policies to prevent this behavior, or to see what happened in the past to circumvent defenses. One example is to record an inappropriate file upload that uses an image extension, such as .jpg or .png, but is actually an executable file that could form the basis of an attack.

— Finally, to be able to render a verdict on whether some event is benign, suspect, or malicious , and do so by using automated methods to go beyond recognizing simple malware signatures or behaviors. This can help reduce pressures on SOC analysts and eliminate false positives. Using the SSH traffic exploit example mentioned above, while NDR can’t look inside the encrypted traffic, it can easily identify that this is a new situation and flag it as a potential abuse thanks to its network visibility powers. As attackers step up their game with more ways to hide their efforts, it becomes more important to quickly triage any potentially harmful event.

As adversaries increasingly lean into AI to evade legacy defenses, network visibility is helping SOCs to spot their moves. Whether they’re probing entry points, moving laterally, or hiding in plain sight, SOCs are intercepting them before real damage is done.

NDR’s unique strength lies in providing actionable insights for analysts to troubleshoot problems that traditional tools ignore or bury deep within their logs. Incident responders can quickly investigate unusual network traffic or suspicious application use, identify hidden malware or intruders, and resolve incidents faster. They're given the potential to reduce the blast radius of a malware infection or prevent a bad actor from stealing proprietary data.

With widespread environmental visibility and faster response, NDR gives organizations agility, preparing them for a future in which attackers leverage ever-evolving, AI-driven tactics.

To discover more about Corelight NDR, visit corelight.com/elitedefense .

Sponsored and written by Corelight .

Did that Colorado station sign say gas for only $1.69? Yes, it did

Hacker News
coloradosun.com
2025-12-11 15:01:49
Comments...
Original Article

National gasoline prices have fallen below $3 a gallon on average for the first time in four years, and Colorado’s prices are even cheaper, as regular drivers have no doubt noticed. The average gallon in Colorado is hovering at about $2.47, at least 40 cents below the price this time last year.

But local drivers may not know why. With the help of some local experts, we’re here to tell you.

It’s because Suncor, the refinery that is the major supplier for Colorado, is buying really cheap Canadian oil.

What’s the price in your neighborhood?

Colorado gas prices are low, but mountain towns still pay a big premium. Help us tell the real story about consumer costs by snapping a photo of your neighborhood gas price, along with where it is in the state, and we’ll post it on our Instagram page .

It’s because OPEC is pumping at major volumes around the world, even though their prices are lower than they want.

And it’s because Buc-ee’s is happy to break even at their massive gas pump array in Johnstown so long as you also walk inside and overspend on beef jerky, neck pillows and beaver-themed sleepwear.

National gas prices hit a four-year low last week according to AAA, averaging $2.99 a gallon, the first dip below $3 in that period. Colorado has it even better, according to AAA’s local price wizard Skyler McKinley. There are neighborhood stations on the Front Range selling at $1.69 a gallon, in fierce price fights with competitors across the street, McKinley said.

Suncor’s Commerce City refinery, the only one in the state, gets most of its crude oil in a pipeline straight from Canada’s controversial oil sands fields. While the U.S. benchmark West Texas Intermediate is selling for $57 a barrel, McKinley noted, Suncor can get Canadian oil at $47.

Growth of the mega-fueling convenience stores, from Buc-ee’s to QuikTrip to Maverik, in the Front Range suburbs is also bringing prices down, McKinley noted. They don’t need to turn a profit on gasoline, because they have thousands of square feet of retail space selling us water-based drinks at $6 a pop, or a hundred varieties of dried meat, or every known variety of salty chip.

That competition doesn’t help the mountain towns, though — Pitkin County is averaging $4.30 a gallon right now, McKinley said, and Front Range drivers who stay home don’t know how good they have it.

McKinley’s other job is running a bar in Routt County, where there’s no Buc-ee’s in a toothy price fight with a Maverik.

“Nobody up here is saying gas prices are cheap. There’s one gas station in this town I’m in, and they’ll charge a comparatively higher margin because they’re able to,” McKinley said.

It’s not your faulty memory, gasoline prices really are lower this holiday season than they’ve been in years. Colorado prices are even lower. (AAA)

Still, it never makes sense for Front Range drivers to go a long way past their $2.30 a gallon local station in search of that elusive $1.60, he said.

“It’s not good economically, once you factor in the gas to get there and the wear and tear on your vehicle, you’re losing money. It would have to be a major, major, major price gap,” he said.

Electric impact

The growth of electric-powered vehicles on Colorado’s roads has not yet contributed to falling gasoline prices, McKinley said. While new EV sales have been robust, the electric fleet is still small compared with the enormous fleet of gasoline cars bought over decades. Besides, among the 75% to 80% of the public still buying gasoline vehicles, Colorado has a well-documented preference for relatively low-mileage pickup trucks and SUVs.

When EVs will make a gasoline price mark “is a great question,” McKinley said. “We’ve just not reached that saturation in the market and in Colorado,” even though the state leads the pack in EV adoption nationally. Even those who have bought EVs are still using them primarily for local trips, not the 200-mile round trips from plains to Front Range cities, or from Front Range cities to resort areas.

“So, look, we’re still a gas-guzzling state, for better, for worse,” he said.

And the Trump administration is only accelerating those habits, clean energy advocates say, with its recent decision to abandon ambitious Democrat-era mileage standards for America’s new vehicle fleet . The Biden administration’s requirement was that each auto manufacturer average about 50 miles per gallon in the vehicles it sells by 2031. They could achieve that by a mix of making more efficient gasoline vehicles and selling more EVs, whose fuel efficiency helps raise the average.

Trump’s agencies are rolling back the Corporate Average Fuel Economy, or CAFE, standard to 34 miles a gallon by 2031. Aside from pollution issues, said Travis Madsen, transportation analyst with the Boulder-based nonprofit Southwest Energy Efficiency Project, the administration’s rollback is a bad economic move for many Americans.

“It’s a step backward on affordability. These fuel economy rules are one of the most important things the federal government has ever done to save people money,” Madsen said. “If you look at the results of the policy, since it was first put into place in 1975, people have saved well over a trillion dollars on fuel that otherwise would have literally gone up in smoke. And there is definitely potential to save more fuel and to save more money.”

Pollution potential is a factor as well, Madsen said. “If you’re burning more fuel than you need to in your vehicle, that results in air pollution, particularly we’re talking about carbon dioxide, which is the main pollutant that’s driving climate change. So the less efficient our vehicles are, the more polluting they are.”

Coloradans weighing the current cheap gas prices and the loss of some EV subsidies when considering buying a new car should know what they are comparing, Madsen added. Despite public complaints about recent increases in their electrical bills, owning and maintaining an EV over time is still a big money-saver, he said.

“Especially if they’re able to charge at home, they have access to pretty cheap electricity,” Madsen said. A Denver driver charging an EV at home overnight is paying for energy equivalent to well below $1 a gallon of gasoline. “If the signs in front of the gas station started to get down near $1 a gallon, or lower, that might start affecting this affordability argument. And adjusted for inflation, gasoline has never been that cheap.”


An Orbital House of Cards: Frequent Megaconstellation Close Conjunctions

Hacker News
arxiv.org
2025-12-11 15:01:44
Comments...
Original Article


Abstract: The number of objects in orbit is rapidly increasing, primarily driven by the launch of megaconstellations, an approach to satellite constellation design that involves large numbers of satellites paired with their rapid launch and disposal. While satellites provide many benefits to society, their use comes with challenges, including the growth of space debris, collisions, ground casualty risks, optical and radio-spectrum pollution, and the alteration of Earth's upper atmosphere through rocket emissions and reentry ablation. There is substantial potential for current or planned actions in orbit to cause serious degradation of the orbital environment or lead to catastrophic outcomes, highlighting the urgent need to find better ways to quantify stress on the orbital environment. Here we propose a new metric, the CRASH Clock, that measures such stress in terms of the time it takes for a catastrophic collision to occur if there are no collision avoidance manoeuvres or there is a severe loss in situational awareness. Our calculations show the CRASH Clock is currently 2.8 days, which suggests there is now little time to recover from a wide-spread disruptive event, such as a solar storm. This is in stark contrast to the pre-megaconstellation era: in 2018, the CRASH Clock was 121 days.

Submission history

From: Sarah Thiele
[v1] Wed, 10 Dec 2025 13:37:34 UTC (401 KB)

How Linux Is Built - Greg Kroah-Hartman

Lobsters
www.youtube.com
2025-12-11 15:01:21
Comments...

From text to token: How tokenization pipelines work

Hacker News
www.paradedb.com
2025-12-11 14:45:49
Comments...
Original Article


By James Blackwood-Sewell on October 10, 2025

From Text to Token: How Tokenization Pipelines Work

When you type a sentence into a search box, it’s easy to imagine the search engine seeing the same thing you do. In reality, search engines (or search databases ) don’t store blobs of text, and they don’t store sentences. They don’t even store words in the way we think of them. They dismantle input text (both indexed and query), scrub it clean, and reassemble it into something slightly more abstract and far more useful: tokens. These tokens are what you search with, and what is stored in your inverted indexes to search over.

Let’s slow down and watch that pipeline in action, pausing at each stage to see how language is broken apart and remade, and how that affects results.

We’ll use a twist on "The quick brown fox jumps over the lazy dog" as our test case. It has everything that makes tokenization interesting: capitalization, punctuation, an accent, and words that change as they move through the pipeline. By the end, it’ll look different, but be perfectly prepared for search.

The full-text database jumped over the lazy café dog

This isn’t a complete pipeline, just a look at some of the common filters you’ll find in lexical search systems. Different databases and search engines expose many of these filters as composable building blocks that you can enable, disable, or reorder to suit your needs.

The same general ideas apply whether you're using Lucene / Elasticsearch , Tantivy / ParadeDB , or Postgres full-text search.

Filtering Text With Case and Character Folding

Before we even think about breaking our text down, we need to think about filtering out anything which isn’t useful. This usually means auditing the characters which make up our text string: transforming all letters to lower-case and, if we know we might have them, folding any diacritics (like in résumé, façade, or Noël) to their base letter.

This step ensures that characters are normalized and consistent before tokenization begins. Café becomes cafe, and résumé becomes resume, allowing searches to match regardless of accents. Lowercasing ensures that database matches Database, though it can introduce quirks: like matching Olive (the name) with olive (the snack). Most systems accept this trade-off: false positives are better than missed results. Code search is a notable exception, since it often needs to preserve symbols and respect casing like camelCase or PascalCase .

Let’s take a look at how our input string is transformed. We are replacing the capital T with a lower-case one, and also folding the é to an e. Nothing too surprising here.

lowercase & fold diacritics

the full-text database jumped over the lazy cafe dog
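As a rough sketch of what this filtering stage does (plain Python for illustration, not the internals of any particular search engine):

import unicodedata

def fold(text: str) -> str:
    # Lowercase, then decompose accented characters (NFKD) and drop the
    # combining marks, so "café" becomes "cafe".
    decomposed = unicodedata.normalize("NFKD", text.lower())
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(fold("The full-text database jumped over the lazy café dog"))
# the full-text database jumped over the lazy cafe dog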

Of course, there are many more filters that can be applied here, but for the sake of brevity, let’s move on.

Splitting Text Into Searchable Pieces with Tokenization

The tokenization phase takes our filtered text and splits it up into indexable units. This is where we move from dealing with a sentence as a single unit to treating it as a collection of discrete, searchable parts called tokens.

The most common approach for English text is simple whitespace and punctuation tokenization: split on spaces and marks, and you’ve got tokens. But even this basic step has nuances: tabs, line breaks, or hyphenated words like full-text can all behave differently. Each system has its quirks: the default Lucene tokenizer turns it’s into [it's], while Tantivy splits it into [it, s] 1 .

Generally speaking there are three classes of tokenizers:

  1. Word oriented tokenizers break text into individual words at word boundaries. This includes simple whitespace tokenizers that split on spaces, as well as more sophisticated language-aware tokenizers that understand non-English character sets 2 . These work well for most search applications where you want to match whole words.

  2. Partial Word Tokenizers split words into smaller fragments, useful for matching parts of words or handling compound terms. N-gram tokenizers create overlapping character sequences, while edge n-gram tokenizers focus on prefixes or suffixes. These are powerful for autocomplete features and fuzzy matching but can create noise in search results.

  3. Structured Text Tokenizers are designed for specific data formats like URLs, email addresses, file paths, or structured data. They preserve meaningful delimiters and handle domain-specific patterns that would be mangled by general-purpose tokenizers. These can be essential when your content contains non-prose text that needs special handling.

For our example we will be using a simple tokenizer, but you can also try a trigram (an n-gram with a length of 3) tokenizer to get a feel for how different the output would be; see the sketch after the token list below.

split on whitespace and punctuation

the full text database jumped over the lazy cafe dog
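Here's a rough sketch of both a word-oriented tokenizer and a trigram tokenizer in plain Python (illustrative only; real engines use more careful rules):

import re

def simple_tokenize(text: str) -> list[str]:
    # Split on anything that isn't a letter or a digit
    # (whitespace, hyphens, and other punctuation).
    return [t for t in re.split(r"[^a-z0-9]+", text) if t]

def trigrams(token: str) -> list[str]:
    # Overlapping character n-grams of length 3, e.g. "lazy" -> ["laz", "azy"]
    return [token[i:i + 3] for i in range(len(token) - 2)] or [token]

tokens = simple_tokenize("the full-text database jumped over the lazy cafe dog")
# ['the', 'full', 'text', 'database', 'jumped', 'over', 'the', 'lazy', 'cafe', 'dog']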

Throwing Out Filler With Stopwords

Some words carry little weight. They appear everywhere, diluting meaning: "the", "and", "of", "are". These are stopwords. Search engines often throw them out entirely 3 , betting that what remains will carry more signal.

This is not without risk. In The Who, "the" matters. That's why stopword lists are usually configurable 4 and not universal. In systems which support BM25, stopword removal is often skipped altogether because the ranking formula already gives less weight to very common terms, but in systems which don't support BM25 (like Postgres tsvector) stopword removal is critically important.

full text database jumped over lazy cafe dog

Notice how removing stopwords immediately makes our token list more focused? We've gone from ten tokens to eight, and what remains carries more semantic weight.
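Continuing the sketch, stopword removal is just a set lookup (the list below is the shared Lucene/Tantivy default mentioned in the footnotes):

STOPWORDS = {
    "a", "an", "and", "are", "as", "at", "be", "but", "by", "for", "if", "in",
    "into", "is", "it", "no", "not", "of", "on", "or", "such", "that", "the",
    "their", "then", "there", "these", "they", "this", "to", "was", "will", "with",
}

def remove_stopwords(tokens: list[str]) -> list[str]:
    return [t for t in tokens if t not in STOPWORDS]

remove_stopwords(tokens)
# ['full', 'text', 'database', 'jumped', 'over', 'lazy', 'cafe', 'dog']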

Cutting Down to the Root with Stemming

Jump , jumps , jumped , jumping . Humans see the connection instantly. Computers don't, unless we give them a way 5 .

Enter stemming. A stemmer is a rule-based machine that chops words down to a common core. Sometimes this happens elegantly, and sometimes it happens brutally. The foundation for most modern English stemming comes from Martin Porter's 1980 algorithm , which defined the approach that gave search engines consistent rules for stripping suffixes while respecting word structure. Today many stemmers are based on the Snowball variant.

The results can look odd. Database becomes databas, lazy becomes lazi. But that's okay because stemmers don't care about aesthetics, they care about consistency. If every form of lazy collapses to lazi, the search engine can treat them as one 6 . There's also lemmatization, which uses linguistic knowledge to convert words to their dictionary forms, but it's more complex and computationally expensive than stemming's "good enough" approach 7 .

full text databas jump over lazi cafe dog

Here's the final transformation: our tokens have been reduced to their essential stems. Jumped becomes jump, lazy becomes lazi, and database becomes databas. These stems might not look like real words, but they serve a crucial purpose: they're consistent. Whether someone searches for jumping, jumped, or jumps, they'll all reduce to jump and match our indexed content. This is the power of stemming: bridging the gap between the many ways humans express the same concept.
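To continue the sketch with an off-the-shelf Snowball stemmer (NLTK is just one assumed choice; any Snowball implementation behaves much the same):

from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("english")

[stemmer.stem(t) for t in ["full", "text", "database", "jumped", "over", "lazy", "cafe", "dog"]]
# ['full', 'text', 'databas', 'jump', 'over', 'lazi', 'cafe', 'dog']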

The Final Tokens

Our sentence has traveled through the complete pipeline. What started as "The full-text database jumped over the lazy café dog" has been transformed through each stage: stripped of punctuation and capitalization, split into individual words, filtered of common stopwords, and finally reduced to stems.

The result is a clean set of eight tokens:

full text databas jump over lazi cafe dog

This transformation is applied to any data we store in our inverted index, and also to our queries. When someone searches for "databases are jumping," that query gets tokenized: lowercased, split, stopwords removed, and stemmed. It becomes databas and jump , which will match our indexed content perfectly.
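Putting the earlier sketches together, the same analysis function runs over both indexed documents and queries (again, an illustration rather than any engine's actual implementation):

def analyze(text: str) -> list[str]:
    # fold -> tokenize -> remove stopwords -> stem
    return [stemmer.stem(t) for t in remove_stopwords(simple_tokenize(fold(text)))]

analyze("The full-text database jumped over the lazy café dog")
# ['full', 'text', 'databas', 'jump', 'over', 'lazi', 'cafe', 'dog']

analyze("databases are jumping")
# ['databas', 'jump']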

Why Tokenization Matters

Tokenization doesn’t get the glory. Nobody brags about their stopword filter at conferences. But it’s the quiet engine of search. Without it, dogs wouldn’t match dog , and jumping wouldn’t find jump .

Every search engine invests heavily here because everything else (scoring, ranking, relevance) depends on getting tokens right. It’s not glamorous, but it’s precise, and when you get this part right, everything else in search works better.

Get started with ParadeDB and see how modern search databases handle tokenization for you.


  1. Which is better? That depends: it's seems more correct and skips storing a useless s token, but it wouldn't match on a search for it .

  2. General purpose morphological libraries like Jieba and Lindera are often used to provide tokenizers that can deal with Chinese, Korean, and Japanese text.

  3. When we remove stopwords we still keep the original position of each token in the document. This allows positional queries ("find cat within five words of dog") even though we have discarded words.

  4. Lucene and Tantivy both have stopwords off by default, and when enabled for English they use the same default list: [a, an, and, are, as, at, be, but, by, for, if, in, into, is, it, no, not, of, on, or, such, that, the, their, then, there, these, they, this, to, was, will, with]

  5. Another method is vector search, which trades lexical stemming for searching over semantic meaning.

  6. Words can also be overstemmed, consider university and universe which both stem to univers , but have very different meanings.

  7. Lemmatization uses actual word lists to make sure it only produces real words. To do this accurately it needs to know the "part of speech" of the source word (noun, verb, etc.).

Disney to invest $1bn in OpenAI, allowing use of characters in video generation tool

Guardian
www.theguardian.com
2025-12-11 14:31:52
Agreement comes amid anxiety in Hollywood over impact of AI on the industry, expression and rights of creators Walt Disney has announced a $1bn equity investment in OpenAI, enabling the AI start-up’s Sora video generation tool to use its characters. Users of Sora will be able to generate short, user...
Original Article

Walt Disney has announced a $1bn equity investment in OpenAI, enabling the AI start-up’s Sora video generation tool to use its characters.

Users of Sora will be able to generate short, user-prompted social videos that draw on more than 200 Disney, Marvel, Pixar and Star Wars characters as part of a three-year licensing agreement between OpenAI and the entertainment giant.

A selection of the videos made by users will also be available for streaming on the Disney+ platform.

Bob Iger, Disney’s CEO, hailed a deal which paired his firm’s “iconic stories and characters” with OpenAI’s AI technology. It will place “imagination and creativity directly into the hands of Disney fans in ways we’ve never seen before”, he claimed.

But it comes amid intense anxiety in Hollywood over the impact of artificial intelligence on the industry, expression and rights of creators.

Disney will also use OpenAI’s application programming interfaces to build new products and tools, becoming a major customer of the ChatGPT maker. It will also deploy ChatGPT for its employees, the companies said.

“Technological innovation has continually shaped the evolution of entertainment, bringing with it new ways to create and share great stories with the world,” said Iger. “The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works.”

Reuters contributed reporting

A pragmatic guide to modern CSS colours - part two

Lobsters
piccalil.li
2025-12-11 14:30:49
Comments...
Original Article

In my previous article on colours , I dove into the practical side of the new colour features for developers who primarily copy and paste values from a design file into their editor.

With all the new colour features that we have in CSS now, we can do more with colours in the browser than designers can do in their design apps, and it opens up a whole world of possibilities.

Manipulating colours

In the previous article, we looked at some basic use cases of relative colours. As a quick recap, the syntax looks something like this:


:root {
  --primary: #ff0000;
}

.primary-bg-50-opacity {
  background: hsl(from var(--primary) h s l / .5);
}

The most important part of the above example is that the h s l letters aren’t just letters; they are variables which contain the hue, saturation, and lightness values of the original colour.

We can replace those letters with a value. For example, #00ff00 has no blue in it, so we could add some by replacing the b (which in this case is 0 ), with a number to add some blue in:


.green-with-a-touch-of-blue {
  color: rgb(from #00ff00 r g 25);
}

This works, but only if you know how much blue the original colour had to start with. Hard-coding 25 in there increases the blue here, but if we’d started with #00ff55, it would decrease the value instead.

The real power comes when we use calc() .


.green-with-a-touch-of-blue {
  color: rgb(from #00ff00 r g calc(b + 25));
}

While I’m using rgb() above, I generally find it hard to work with. Instead, I tend to stick with hsl() and oklch(), where this technique can be a lot more useful.

With both of those colour functions, the hue value goes from 0 to 360, but if you go larger than 360, it simply loops around, so if you add 180 to the hue, you’ll always get the colour from the other side of the colour wheel.


:root {
	--color-primary: #2563eb;
	
	--color-complementary:
		hsl(from var(--color-primary) 
		    calc(h + 180) s l);
}

Or if you’re after more of a tertiary colour scheme:


:root {
	--color-primary: #2563eb;
	
	--color-secondary:
		hsl(from var(--color-primary) 
		    calc(h + 120) s l);
		    
	--color-tertiary: 
		hsl(from var(--color-primary) 
	      calc(h - 120) s l);
}

This approach also makes it easy to create lighter/darker colours without much work:


:root {
	--primary-base: hsl(221 83% 50%);
	
	--primary-100: 
	  hsl(from var(--primary-base) 
		    h s 10%);
	--primary-200: 
	  hsl(from var(--primary-base) 
		    h s 20%);
  --primary-300: 
	  hsl(from var(--primary-base) 
		    h s 30%);
  /* etc */
}

This can work, but rather than taking a base colour and setting the lightness to specific stops, you might instead want to go a little lighter or darker relative to whatever the base colour’s lightness value is.

Once again, we can use a calc() to help with that.


:root {
  --color-primary-base: #2563eb;
  --color-primary-lighter: hsl(from var(--color-primary-base) h s calc(l + 25));
  --color-primary-darker: hsl(from var(--color-primary-base) h s calc(l - 25));
}

Surface levels

One nice use case for this approach is to create surface levels. In light themes, we can rely solely on shadows to help distinguish the surface levels, but shadows on dark backgrounds don’t do very much. Instead, we want to get slightly lighter on each surface level in a dark theme, which we can do with a little help of light-dark() and custom properties.


:root {
  --surface-base-light: hsl(240 67% 97%);
  --surface-base-dark: hsl(252 21% 9%);
	/* shadows are in the codepen below */
}

.surface-1 {
  background: 
	  light-dark(
		  var(--surface-base-light), 
		  var(--surface-base-dark));
}

.surface-2 {
	background: 
		light-dark(
	    var(--surface-base-light),
	    hsl(from var(--surface-base-dark) h s calc(l + 4))
	  );
}

.surface-3 {
	background: light-dark(
	    var(--surface-base-light),
	    hsl(from var(--surface-base-dark) h s calc(l + 8))
	  );
}

Creating a full colour scheme

While adjusting lightness values can come in handy for simple tasks, when creating a colour scheme, it’s common to use what the design world calls perceptual colour scaling , where the hue and saturation also shift by small amounts.

In general, as you increase the lightness, you also want to slightly increase the saturation, while moving the hue to slightly cooler values, and reduce the saturation as you get darker. To do that, we can make small tweaks to the hue and saturation using calc() as we create our tints and shades.


:root {
	--primary-base: hsl(221 83% 50%);
	
	--primary-400: 
	  hsl(from var(--primary-base) 
		    calc(h - 3) 
		    calc(s + 5) 
		    60%);
  --primary-300: 
	  hsl(from var(--primary-base) 
		    calc(h - 6) 
		    calc(s + 10) 
		    70%);
  /* etc */
}

Here’s an example of that in action, along with a version with no hue or saturation shifts under it. The most obvious differences are in the 100, 200, and 300 swatches where, when only shifting the lightness value, the colour appears to lose a bit of its vibrancy.

It is a bit of setup, but once you have one you are happy with, it should work great for all your colours.

You can take this a step further with some more advanced math, as Matthias Ott showed at CSS Day 2024 (timestamped to the relevant part of the talk).


Speaking of perceptual colours

One of the benefits of using hsl() is that it’s easy to predict what a colour will look like. The downside is that even if you keep the saturation and lightness consistent, some hues will appear perceptually brighter than others as you shift around the colour wheel.

Below, the saturation and lightness of both the green and the blue are the same, yet the text on the blue is easy to read, with a contrast ratio of over 5, while the text on the green background is very hard to even see, with a contrast ratio barely over 1.
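
A minimal sketch of that comparison (the exact hue values and the white text are assumptions on my part, not taken from the embedded demo):

.swatch-blue {
  background: hsl(240 100% 50%); /* white text on this is comfortably readable */
  color: white;
}

.swatch-green {
  background: hsl(120 100% 50%); /* same saturation and lightness, but white text is barely legible */
  color: white;
}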

This is where oklch() comes in.

oklch() works very similarly to hsl(), and is based on the LCH colour space (also known as the HCL colour space), which was designed to keep the perception of colours consistent as we shift through the hues. Here, I’ve once again started with a blue and only changed the hue value for the green, and this time we don’t run into the same issue.
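
For comparison, a hedged sketch of that oklch() version (the specific lightness, chroma, and hue values are my own, not from the demo):

.swatch-blue {
  background: oklch(0.6 0.15 260);
  color: white;
}

.swatch-green {
  background: oklch(0.6 0.15 140); /* same lightness and chroma, so the perceived brightness stays much closer to the blue */
  color: white;
}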

With the LCH colour model, the first value is the lightness, which works on a scale of 0 to 1. The way it’s calculated is a bit different from HSL, as it’s based on the perceptual lightness, but the concept is the same, with 0 being black, and 1 being white. You can also use percentages if you’d prefer.

The hue, which is the last value, works the same way as it does in hsl() , with the important difference that 0 in hsl() is red, while 0 in lch() is magenta.

In this CodePen, both swatches are using the same angle for their colour, and as you can see, they’re quite different:

Lastly, we have the biggest difference between the two, which is the Chroma value . It’s similar to the saturation of hsl() , with 0 being grey, and the higher the number, the more “pure” it becomes.

The scale for the Chroma is 0 to… well, this is where things get strange .

Theoretically, there is no upper bound on the Chroma value because colours are strange. In practice, the largest value is around 0.4 , which is what 100% maps to if you use percentage instead of a unit-less value.

Sticking with a percentage sounds like the best solution, but the big problem with Chroma is that its upper bound changes depending on the hue and lightness values. This can make for some pretty unexpected results…
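
As a rough illustration (these values are my own, and the exact limits depend on the output colour space):

/* Same lightness and chroma, different hues: the light green fits within the
   sRGB gamut, but the light blue at this chroma likely doesn't, so the browser
   has to gamut-map it to something noticeably less saturated. */
.light-green { background: oklch(0.9 0.12 140); }
.light-blue  { background: oklch(0.9 0.12 260); }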

I was very excited about LCH coming to CSS, but I’ve found myself sticking to hsl() for a lot of things because of the variable upper limit of the Chroma value, and the strange shifts that can happen like we can see above.

As awkward as the Chroma values can be, there are still benefits to using it . The wider gamut is nice, but knowing the perceived brightness is the same across the hues really does come in handy.

The hardest part is getting the initial colour. One option is to use a colour picker. This colour picker has a nice visualisation of the limits on Chroma as you change the lightness and hue.

However, there’s another option thanks to relative colours! In part one , I looked at an example of how relative colours can be useful for a toast notification, and we can improve upon that version by using oklch() , with the advantage of the base colours still using hsl() !


.toast {
  --base-toast-color: hsl(225, 87%, 56%);
}

[data-toast="info"] {
  --toast-color: oklch(from var(--base-toast-color) l c 275);
}

[data-toast="warning"] {
  --toast-color: oklch(from var(--base-toast-color) l c 80);
}

[data-toast="error"] {
  --toast-color: oklch(from var(--base-toast-color) l c 35);
}

As you can see here, the oklch() version is more consistent in its styling from one to the next. It’s most noticeable in the borders, where the contrast between the border and background changes by quite a lot in the hsl() versions, but there’s also a general inconsistency in the perceived saturation between each one.

oklch() vs lch()

I need to mention that we have both oklch() and lch() in CSS (as well as oklab() and lab() ). The purpose of the LCH colour space was to match human perception across hues as closely as possible.

It was created in 1976 and had some flaws, mostly in the blue and purple ranges, so in 2020 OKLCH was created as a new version of LCH that fixes those issues.

If you’d like more information on the two of them, I’d suggest checking out this article, but really you can keep things simple and just use oklch().

Mixing two different colours

Relative colours are great when we want to modify a channel (or multiple channels) of a specific colour, but you might have instances where you have two different colours that you want to mix together.

For that, we have the color-mix() function, which allows us to mix two colours together.


.purple {
  color: color-mix(in srgb, red, blue);
}

We have to define a colour space (for now)

You might have noticed that there is an in srgb as the first argument in the example above. For now, we have to define which colour space we want to use, and different colour spaces can give some pretty different results.

I generally try oklab , then oklch to start, and most of the time I’m happy with one of them, but I’ll sometimes experiment to see what the others give me.

Additionally, the CSS Working Group has recently resolved to make oklab the default value, so once browsers implement that change, you’ll no longer have to provide a colour space (but you will be able to if you want).


Controlling the amount of each colour

When we use color-mix(), it will use 50% of each colour by default, but we can also control how much of each colour we want.


.red-with-a-touch-of-blue {
  background: color-mix(in oklab, red 90%, blue);
}

.or-like-this {
  background: color-mix(in oklab, red, blue 10%);
}

Transparency with color-mix()

There are two ways to get transparent values. The first is to use percentages that add up to less than 100%.


.semi-opaque {
  background: color-mix(in oklab, red 60%, blue 20%);
}

Whatever the percentages add up to is what the alpha value will be set to, so for the code example above the alpha value would be 80% (if the total is above 100%, the amounts are normalised back down to 100% and the result stays fully opaque).
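
For the over-100% case, a quick sketch (the values here are my own, not from the article):

.normalised-mix {
  /* 80% + 40% = 120%, so the amounts are scaled back to two-thirds red and
     one-third blue, and no transparency is added. */
  background: color-mix(in oklab, red 80%, blue 40%);
}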

We can also mix with transparent .


.thirty-percent-opacity-red {
  background: color-mix(in oklch, red 30%, transparent);
}

This works, but if that’s what I wanted to do, I’d probably use relative colours instead.

One thing we can use color-mix() for is banded gradient effects, without having to figure out all the in-between values ourselves. It might be a bit of a niche use case, but it’s surprisingly easy, as the sketch below shows.
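
A minimal sketch of the idea (the colours and stop positions are illustrative, not taken from the article’s demo):

.banded {
  /* The double-position syntax creates hard-edged bands, and color-mix()
     works out the in-between colours. */
  background: linear-gradient(
    to right,
    red 0% 25%,
    color-mix(in oklab, red 66%, blue) 25% 50%,
    color-mix(in oklab, red 33%, blue) 50% 75%,
    blue 75% 100%
  );
}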

Some of this will be easier in the future

One of the problems with a lot of these new features is the repetition involved in using them. Luckily, custom functions are coming to CSS, which will help with this.


@function --lower-opacity(--color, --opacity) {
  result: oklch(from var(--color) l c h / var(--opacity));
} 

.lower-opacity-primary {
  background: --lower-opacity(var(--primary), .5); 
}


@function --shade-100(--color) returns <color> {
  result: hsl(from var(--color) calc(h - 12) calc(s + 15) 95%);
}
@function --shade-200(--color) returns <color> {
  result: hsl(from var(--color) calc(h - 10) calc(s + 12) 85%);
}
/* etc. */

.call-to-action {
  background: --shade-200(var(--accent));
}

.hero {
  background: --shade-800(var(--primary));
  color: --shade-100(var(--primary));
}

Things have changed a lot

While a lot of developers copy and paste values from design files, we can actually do more than what’s possible in most design apps, from colour mixing to relative colours, larger colour gamuts, and more.

While some of what we looked at does require a bit of setup, once it’s in place we can create very robust systems, and, compared with the static world of design software, it does beg the question of whether more design should be done directly in the browser.



Pop Goes the Population Count?

Hacker News
xania.org
2025-12-11 14:30:28
Comments...
Original Article

Written by me, proof-read by an LLM.
Details at end.

Who among us hasn’t looked at a number and wondered, “How many one bits are in there?” No? Just me then?

Actually, this “population count” operation can be pretty useful in some cases like data compression algorithms, cryptography, chess, error correction , and sparse matrix representations . How might one write some simple C to return the number of one bits in an unsigned 64 bit value?

One way might be to loop 64 times, checking each bit and adding one if set. Or, equivalently, shifting that bit down and adding it to a running count: sometimes the population count operation is referred to as a “horizontal add” as you’re adding all the 64 bits of the value together, horizontally. There are “divide and conquer” approaches too, see the amazing Stanford Bit Twiddling Hacks page for a big list.

My favourite way is to loop while the value is non-zero, and use a cute trick to “clear the bottom set bit”. The loop count is then the number of set bits. How do you clear the bottom set bit? You AND the value with itself decremented!

value       : 11010100
subtract 1  : 11010011
& value     : 11010000

If you try some examples on paper, you’ll see that subtracting one always moves the bottom set bit down by one place, setting all the bits from there down. Everything else is left the same. Then when you AND, the bottom set bit is guaranteed to be ANDed with zero, but everything else remains. Great stuff!

All right, let’s see what the compiler makes of this:
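
A minimal sketch of the function described above might look like this (the naming is my own):

#include <stdint.h>

// Count set bits by repeatedly clearing the lowest set bit until the
// value reaches zero; the number of loop iterations is the answer.
int popcount64(uint64_t value) {
    int result = 0;
    while (value) {
        value &= value - 1;  // clear the bottom set bit
        ++result;
    }
    return result;
}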

The core loop is pretty much what we’d expect, using the lea trick to get value - 1, then ANDing and counting:

.L3:
  lea rax, [rdi-1]          ; rax = value - 1
  add edx, 1                ; ++result
  and rdi, rax              ; value &= value - 1
  jne .L3                   ; ...while (value)

Great stuff, but we can do better. By default gcc and clang both target some kind of “generic” processor which influences which instructions they can use. We’re compiling for Intel here, and gcc’s default is somewhere around Intel’s “nocona” architecture, from 2004. Unless you are running vintage hardware you can probably change it to something better. Let’s pick the super up-to-date “westmere” (from 2010…) using -march=westmere and see what happens 1 :

Wow! The entire routine has been replaced with a single instruction: popcnt rax, rdi. When I first saw this optimisation I was blown away: the compiler recognises a relatively complex loop as being functionally equivalent to a single instruction. Both gcc and clang can do this, and within Compiler Explorer you can use the optimisation pipeline viewer in clang to see that clang’s “loop deletion pass” is responsible for this trick:

Screenshot of CE showing the opt pipeline viewer with the loop being replaced with a call to @llvm.ctpop.i64
Compiler Explorer's Opt Pipeline View

Compilers canonicalise code too, so some similar population count code will also be turned into a single instruction, though sadly not all. In this case, it’s probably better to actually use a standard C++ routine to guarantee the right instruction as well as reveal your intention: std::popcount . But even if you don’t, the compiler might just blow your mind with a single instruction anyway.

See the video that accompanies this post.


This post is day 11 of Advent of Compiler Optimisations 2025 , a 25-day series exploring how compilers transform our code.

This post was written by a human ( Matt Godbolt ) and reviewed and proof-read by LLMs and humans.

Support Compiler Explorer on Patreon or GitHub , or by buying CE products in the Compiler Explorer Shop .

Posted at 06:00:00 CST on 11 th December 2025.

What a Century-Old Press Service Teaches Us About Building Worker Power

Portside
portside.org
2025-12-11 14:23:37
What a Century-Old Press Service Teaches Us About Building Worker Power Kurt Stand Thu, 12/11/2025 - 09:23 ...
Original Article

This won’t come as a surprise to union activists, but the mainstream press doesn’t always fairly represent the labor movement. That was true in 1919, the year the Federated Press (FP) was founded, and it remains true today.

The FP was created to counteract the anti-labor bias in the mainstream press during the post-World War I strike wave. At the first convention of the Farmer-Labor Party, a coalition of labor activists, editors, and socialists hatched the idea for a cooperative, labor-oriented press service that would provide national and international news to subscribing labor newspapers. Imagine the Associated Press, but written by and for labor.

For more than three decades, from 1919 to 1956, the FP sent daily news sheets to hundreds of labor newspapers. At its peak in the early 1940s, FP journalists reached millions of American workers with feature stories, regular columns, cartoons, and briefs covering strikes and union news, national and international politics, employer tactics, and suggestions for how workers could act to shape the conditions that affected their lives.

But the FP was more than just news. It represented the vibrancy of American labor and working-class culture in the mid-20th century. One hundred years ago, there were 629 labor newspapers in print. Today, there are about 100.

From the Women's Pages to the Front Page

In my new book Labor Journalism, Labor Feminism: Women at the Federated Press, I argue that labor journalists and newspapers played a critical role in bringing working women’s demands to the fore. To explain the growth of labor feminism, most historians have focused on individual women, such as Caroline Davis of the United Auto Workers (UAW), or organizations like the Women’s Bureau in the U.S. Department of Labor. But the importance of labor newspapers should not be overlooked in understanding how the demands of labor feminism went mainstream.

World War II marked a turning point for both women workers and women journalists. The FP’s women journalists had been publicizing union women’s demands for equal pay since the 1920s. Women’s employment swelled during World War II, but not just in defense plants. Women journalists moved from the women’s pages to the front pages, bringing working women’s problems with them.

1024px Womens Bureau29493r

National Photo Company, Public domain, via Wikimedia Commons

What Can We Learn From the Federated Press?

While the book focuses on the FP’s women journalists to tell this story, it is also a history of the FP itself, one that may have some lessons for labor today. Two things stand out to a 21st century reader of the FP’s news sheets.

First, many of the political and economic battles  that were waged a century ago are still being fought today. In the 1920s, FP economic columnist Laurence Todd pointed out how tariffs would negatively impact working people. A year before the 1929 crash, Esther Lowell reported on rising unemployment, strikes, and soup lines in New England textile towns. In the early 1940s, Washington D.C. bureau chief Virginia Gardner warned that conservatives were discarding party identification, and uniting in a new coalition of “poll taxers plus Hoover Republicans and reactionary Northerners” to dismantle the New Deal. In the 1930s and 1940s, Julia Ruuttila closely followed the federal government’s efforts to deport political leftists and labor activists. From the 1920s through the 1950s, big business and political conservatives used the specter of communism to discredit labor’s demands, civil rights, and liberal political programs. The FP’s news sheets provide a road map of how we came to our current political moment and a historical record of how ordinary people organized to resist political and economic repression through the labor movement.

There are many reasons why American labor succeeded in the mid-20th century, but foremost among them was unity. One of the defining characteristics of the FP was its editorial policy of nonpartisanship within the labor movement. Carl Haessler, the FP’s managing editor for almost its entire existence, insisted that all organized labor - the American Federation of Labor’s (AFL) craft unions, the Congress of Industrial Organizations’s (CIO) industrial unions, and independent unions - be accorded fair and equal treatment. The FP had no favorites within the labor movement, and it was criticized from both the left and the right for it. Conservatives within the AFL argued that by giving any voice to the political left, the FP weakened the labor movement. At the same time, leftists in the CIO argued that giving anticommunist conservatives equal voice weakened the labor movement. But Haessler stuck by its nonpartisanship policy until 1955. The ability to maintain unity and diversity of opinion was crucial to the success and longevity of the FP.

How can organized labor work toward a program that unifies America’s workers under a shared program of action while allowing for diverse political perspectives? Given the current political and economic crisis, organizing the unorganized millions of Americans seems to be the only realistic path forward. The men and women who organized in the 1920s, faced with a hostile federal government, powerful business interests, and no legal power, but unified in their shared goal of building power for working people, offer one model.

1024px Chairman Dies of House Committee Investigating Un American Activities

Harris & Ewing, official White House photographers, Public domain, via Wikimedia Commons

The second lesson is about persistence in the face of political repression. FP journalists Julia Ruuttila, Virginia Gardner, Mim Kelber, Harvey O’Connor, Carl Haessler, and others were called before the House Un-American Activities Committee (HUAC) and accused of being communists. Harvey and Jessie O’Connor had their passports revoked. But each of them remained focused on their work and committed to their political ideals through very dark times. They would never, at the height of McCarthyism in the 1940s and 1950s, have predicted the liberal resurgence of the sixties. The last chapter of Labor Journalism, Labor Feminism follows the four women journalists featured in the book after the FP closed in 1956. Julia Ruuttila remained a fixture of labor activism in Portland, Oregon into the 1980s, and became a mentor to younger women activists. Mim Kelber linked the labor feminism of the Old Left to the mainstream feminism of the New Left in the 1960s, and remained active in environmental causes until her death in 2004. All of them continued to mentor younger activists, including Ray Mungo and Marshall Bloom, who in 1967 launched the Liberation News Service, a press agency for underground, alternative, and radical newspapers. For labor activists facing similar challenges today, their stories offer a glimmer of hope: political winds shift in ways we cannot predict, but the work of organizing endures.

Lessons for a Movement in Crisis

As workers today face a decades-long decline in union density, rising inequality, and a hostile media landscape, the FP's century-old experiment in independent labor journalism feels urgent again. Labor Journalism, Labor Feminism, a history of a long-forgotten press agency, speaks to our current moment and the pressing need to re-energize America’s labor movement.

Labor Journalism, Labor Feminism: Women at the Federated Press will be available in 2026 through the University of Illinois Press.

Victoria Grieve is a Professor of History, teaching courses in modern U.S. history and visual culture. A native of Philadelphia, Dr. Grieve specializes in visual culture, the history of childhood, and the Cold War. Her publications include Little Cold Warriors: American Childhood in the 1950s (2018) and The Federal Art Project and the Creation of Middle Culture (2009). She is currently working on a book about labor feminism and the radical press during the Cold War.

About Power At Work: Sustained and effective worker power arises out of collective action. By building self-funding, democratic organizations, America’s workers can confront and influence powerful forces, including employers, Wall Street, and the government. Our goal at Power At Work is to contribute to a discourse in the United States that emphasizes the importance of collective action and puts workers and worker power at the center of that conversation.

The architecture of “not bad”: Decoding the Chinese source code of the void

Hacker News
suggger.substack.com
2025-12-11 14:21:14
Comments...
Original Article

🔴

In Episode 03 of my psychological thriller, Script in the Audience , there is a trivial moment where a character makes a correct deduction.

In English, I wrote: “He’d guessed right.” Simple. Direct. Boolean value = True.

But getting to that “True” value required wading through a surprising number of error messages. It felt absurd, yet it was hard-won.

The original Chinese sentence was simple: “他没猜错。”

Literally: He didn’t guess wrong.

I tried every variation:

  • “He wasn’t wrong” (Sounds like he’s arguing with someone).

  • “He didn’t guess incorrectly” (Sounds like a robot hoping to pass the Turing test).

  • “He wasn’t mistaken” (Too formal, like a manager auditing a subordinate’s work).

None of them felt right.

Then I realized: that’s not how English behaves.

English would say: “He was right.” Or “He guessed correctly.”

Direct. Affirmative. Landed.

Right is right, wrong is wrong. You don’t say ‘not wrong.’

I sat there thinking: He clearly guessed correctly, so why is my instinct to say “he didn’t guess wrong”?

Then it hit me: My native Operating System (Chinese) does not like to return a direct True . It prefers !False .

Chinese and English don’t just have different words for the same reality. They construct different realities entirely.

In Chinese, affirmation is often compiled through negation:

  • 没错 (méi cuò) = “not wrong” = Right

  • 不差 (bù chà) = “not bad” = Decent

  • 还行 (hái xíng) = “still passable” = Okay

  • 没事 (méi shì) = “no problem” = It’s fine

In English, this feels bizarre. If something is good, you say:

  • Nice

  • Great

  • Perfect

  • Brilliant

You name the quality directly. You point at it. You own it.

In English, affirmation is an act of Attribute Assignment .

When you say “That’s a great idea,” you are tagging an object with a positive value. You are taking a stance. You are making a commitment.

Negative Affirmation corresponds to the “Void” (无) in a high-context culture.

It maintains ambiguity, creates room for maneuvering, and keeps responsibility elastic.

Direct Affirmation corresponds to “Presence” (有) in a low-context culture.

It demands a clear attitude, rapid categorization, and the assumption of a stance.

Language itself is political; it forms a feedback loop that shapes both individual cognition and social order.

Negative Word + Negative Word = Ambiguous Affirmation.

This structure is essentially “Tone Dampening.” The negation here serves a function of tonal regulation rather than semantic reversal.

Ambiguous affirmation is an act of responsibility avoidance. When I say something is “not bad” (bù cuò), I am deploying a linguistic strategy of Retractable Design .

It engineers interpretative flexibility and carves out a space for plausible deniability.

This strategy is defensible when retreating and effective when attacking.

  • To an optimist, I have expressed approval.

  • To a pessimist, I have merely confirmed the absence of failure.

  • To myself, I have retained a backdoor.

If the thing turns out to be a disaster later, I can safely say: “I only said it wasn’t bad; I never promised it was perfect.”

This is the philosophy of the “Void” (无).

It is the art of the “Minimum Necessary Investment.” It prioritizes maneuverability over accuracy. The retraction cost is extremely low, and there is no pressure to maintain logical consistency.

This is what linguists call a “High Context” strategy: meaning exists in the context surrounding the words, not in the words themselves.

How do ambiguity and “leaving blank space” (留白) function as communication strategies?

  • Ambiguity = Maintaining multiple exits.

  • Leaving blank space = Keeping the right of interpretation in one’s own hands.

  • Negative Affirmation is the linguistic organ of ambiguity.

But this murky ambiguity is also a psychological defense mechanism. You haven’t said anything wrong , but you haven’t said everything either. Language becomes a form of psychological armor.

When words themselves lose specific meaning, the ambiguity of grammar takes over: it accommodates emotional uncertainty, knowledge uncertainty, and relational uncertainty.

The function of this language is not to express facts, but to maintain relationships and positions. Using negative affirmation makes one’s stance fluid, the process elastic, and the outcome uncertain.

It is the “Void”—reading the air. Speaking, yet saying nothing.

I worked in branding for eight years, and I faced this cognitive dissonance every day. Language—or rather, the subtext beneath the words—becomes crystal clear if you look closely.

The English Market sells the “Entity.”

It assumes the consumer is a rational adult seeking utility. The copy sells the presence of a benefit: “Amazing flavor,” “Perfect balance,” “Brilliant deal.”

It demands a clear definition of what is good .

The Chinese Market sells the “Void.”

It assumes the world (and people) are inherently risky. Therefore, the highest value is the absence of harm .

Look at the labels: “0 Sugar,” “0 Fat,” “Non-greasy,” “Non-irritating,” “No burden.”

In the West, “Good” means the addition of value.

In the East, “Good” means the successful elimination of risk.

This is why writing Script in the Audience is such a schizophrenic experience for me. I am toggling between two incompatible rendering engines.

Here’s the uncomfortable part: these linguistic habits train the brain.

I grew up speaking Chinese, so my default mode becomes:

  • Grayscale Thinking: Good and bad are endpoints of a spectrum; most things live in between.

  • Contextual Judgment: Whether something is bù cuò (not bad) or hái xíng (still passable) depends on who is asking and why.

  • Responsibility Diffusion: You learn to participate without pinning yourself down.

Chinese trains the brain for Spectrum Analysis . It sees the “Gray Scale.” Because there is a vast interval between “good” and “bad,” it accommodates complex relationships.

But at the same time, it can breed extremism and ignorance because of its vagueness, inefficiency, and dilution of responsibility.

It is a relatively closed system: not everyone in a “high context” culture can actually decode that context; classes are automatically divided by their ability to read the air. Truth is not a fixed point, but a sliding variable dependent on the observer. It creates a reality that is terrifyingly ambiguous.

English trains the brain for Categorization . It sorts the world into bins: Positive / Neutral / Negative. It is efficient, high-speed, and low-latency.

But it is also “naked.” Every sentence is a small public exposure of your judgment.

These two languages are constantly shaping two different models of reality, molding the way people think.

If I hadn’t compared them, I might never have realized this.

I would have simply thought: “This is how reality is.”

This difference is one of the sources of horror in Script in the Audience .

Wider. Freer. Suggger

Debug Log:

Even as I type these words, my underlying OS is screaming at me to delete them: “Direct affirmation demands a public persona that bears responsibility. Every direct ‘Yes’ is a tiny act of self-exposure.”

My experience warns me, too: “Public discourse is for agendas and posturing. Only a fool tries to share genuine observations or philosophy.”

God, publishing this feels like streaking.

Might as well leave the lights on. I’ve set it to auto-publish.

I’m going to pour a whiskey and peel an orange.🍊

See you on the other side.


Scientists Discover the Earliest Human-Made Fire, Rewriting Evolutionary History

403 Media
www.404media.co
2025-12-11 14:17:04
The discovery of fire-cracked handaxes and sparking tools in southern Britain pushes the timeline of controlled fires back 350,000 years....
Original Article


Humans made fires as early as 400,000 years ago, pushing the timeline of this crucial human innovation back a staggering 350,000 years, reports a study published on Wednesday in Nature .

Mastery of fire is one of the most significant milestones in our evolutionary history, enabling early humans to cook nutritious food, seek protection from predators, and establish comfortable spaces for social gatherings. The ability to make fires is completely unique to the Homo genus that includes modern humans ( Homo sapiens ) and extinct humans, including Neanderthals.

Early humans may have opportunistically exploited wildfires more than one million years ago, but the oldest known controlled fires, which were intentionally lit with specialized tools, were previously dated back to about 50,000 years ago at Neanderthal sites in France.

Now, archaeologists have unearthed the remains of campfires ignited by an unidentified group of humans 400,000 years ago at Barnham, a village near the southern coast of the United Kingdom.

“This is a 400,000-year-old site where we have the earliest evidence of making fire—not just in Britain or Europe, but in fact, anywhere else in the world,” said Nick Ashton, an archaeologist at the British Museum who co-authored the study, in a press briefing held on Tuesday.

“Many of the great turning points in human development, and the development of our civilization, depended on fire,” added co-author Rob Davis, also an archaeologist at the British Museum. “We're a species who have used fire to really shape the world around us—in belief systems, as well. It's a very prominent part of belief systems across the world.”

Artifacts have been recovered from Barnham for more than a century, but the remnants of this ancient hearth were identified within the past decade. The researchers were initially tipped off by the remains of heated clay sediments, hydrocarbons associated with fire, and fire-cracked flint handaxes.

But the real smoking gun was the discovery of two small fragments of iron pyrite, a mineral commonly used to strike flint to produce sparks at later prehistoric campfires such as the French Neanderthal sites.

Discovery of the first fragment of iron pyrite in 2017 at Barnham, Suffolk Image: Jordan Mansfield, Pathways to Ancient Britain Project.

“Iron pyrite is a naturally occurring mineral, but through geological work in the area over the last 36 years, looking at 26 sites, we argue that pyrite is incredibly rare in the area,” said Ashton. “We think humans brought pyrite to the site with the intention of making fire.”

The fire-starters were probably Neanderthals, who were known to be present in the region at the time thanks to a skull found in Swanscombe, about 80 miles northeast of Barnham. But it’s possible that the fires were made by another human lineage such as Homo heidelbergensis , which also left bones in the U.K. around the same period. It was not Homo sapiens as our lineage emerged in Africa later, about 300,000 years ago.

Regardless of this group’s identity, its ability to make fire would have been a major advantage, especially in the relatively cold environment of southern Britain at the time. It also hints that the ability to make fire extends far deeper into the past than previously known.

“We assume that the people who made the fire at Barnham brought the knowledge with them from continental Europe,” said co-author Chris Stringer, a physical anthropologist at the Natural History Museum. “There was a land bridge there. There had been a major cold stage about 450,000 years ago, which had probably wiped out everyone in Britain. Britain had to be repopulated all over again.”

“Having that use of fire, which they must have brought with them when they came into Britain, would have helped them colonize this new area and move a bit further north to places where the winters are going to be colder,” he continued. “You can keep warm. You can keep wild animals away. You get more nutrition from your food.”

Excavation of the ancient campfire, removing diagonally opposed quadrants. The reddened sediment between band B’ is heated clay. Image: Jordan Mansfield, Pathways to Ancient Britain Project.

Although these humans likely had brains close in size to our own, the innovation of controlled fire would have amplified their cognitive development, social bonds, and symbolic capacities. In the flickering light of ancient campfires, these humans shared food, protection, and company, passing on a tradition that fundamentally reshaped our evolutionary trajectory.

“People were sitting around the fires, sharing information, having extra time beyond pure daylight to make things, to teach things, to communicate with each other, to tell stories,” Stringer said. “Maybe it may have even fueled the development of language.”

“We've got this crucial aspect in human evolution, and we can put a marker down that it was there 400,000 years ago,” he concluded.

Disney making $1B investment in OpenAI, will allow characters on Sora AI

Hacker News
www.cnbc.com
2025-12-11 14:12:14
Comments...
Original Article

Disney and OpenAI reach three-year licensing agreement

The Walt Disney Company on Thursday announced it will make a $1 billion equity investment in OpenAI and will allow users to make videos with its copyrighted characters on its Sora app.

OpenAI launched Sora in September, and it allows users to create short videos by simply typing in a prompt.

As part of the startup's new three-year licensing agreement with Disney, Sora users will be able to make content with more than 200 characters across Disney, Marvel, Pixar and Star Wars starting next year.

"The rapid advancement of artificial intelligence marks an important moment for our industry, and through this collaboration with OpenAI we will thoughtfully and responsibly extend the reach of our storytelling through generative AI, while respecting and protecting creators and their works," Disney CEO Bob Iger said in a statement.


As part of the agreement, Disney said it will receive warrants to purchase additional equity and will become a major OpenAI customer.

Disney is deploying OpenAI's chatbot ChatGPT to its employees and will work with its technology to build new tools and experiences, according to a release.

When Sora launched this fall, the app rocketed to the top of Apple's App Store and generated a storm of controversy as users flooded the platform with videos of popular brands and characters.

The Motion Picture Association said in October that OpenAI needed to take "immediate and decisive action" to prevent copyright infringement on Sora.

OpenAI CEO Sam Altman said more "granular control" over character generation was coming, according to a blog post following the launch.

As AI startups have rapidly changed the way that people can interact with content online, media companies, including Disney, have kicked off a series of fresh legal battles to try and protect their intellectual property.

Disney sent a cease and desist letter to Google late on Wednesday alleging the company infringed its copyrights on a "massive scale." In the letter, which was viewed by CNBC, Disney said Google has been using its copyrighted works to train models and distributing copies of its protected content without authorization.

Universal and Disney have sued the AI image creator Midjourney , alleging that the company improperly used and distributed AI-generated characters from their movies. Disney also sent a cease and desist letter to Character.AI in September, warning the startup to stop using its copyrighted characters without authorization.

Disney's deal with OpenAI suggests the company isn't ruling out AI platforms entirely.

The companies said they have affirmed a commitment to the use of AI that "protects user safety and the rights of creators" and "respects the creative industries," according to the release.

OpenAI has also agreed to maintain "robust controls" to prevent illegal or harmful content from being generated on its platforms.

Some of the characters available through the deal include Mickey Mouse, Ariel, Cinderella, Iron Man and Darth Vader. Disney and OpenAI said the agreement does not include any talent likeness or voices.

Users will also be able to draw from the same intellectual property while using ChatGPT Images, where they can use natural language prompts to create images.

"Disney is the global gold standard for storytelling, and we're excited to partner to allow Sora and ChatGPT Images to expand the way people create and experience great content," Altman said in a statement.

Curated selections of Sora videos will also be available to watch on Disney's streaming platform Disney+.


Security updates for Thursday

Linux Weekly News
lwn.net
2025-12-11 14:10:59
Security updates have been issued by Debian (ffmpeg, firefox-esr, libsndfile, and rear), Fedora (httpd, perl-CGI-Simple, and tinyproxy), Oracle (firefox, kernel, libsoup, mysql8.4, tigervnc, tomcat, tomcat9, and uek-kernel), SUSE (alloy, curl, dovecot24, fontforge, glib2, himmelblau, java-17-openjdk...
Original Article
Dist. ID Release Package Date
Debian DSA-6079-1 stable ffmpeg 2025-12-10
Debian DLA-4401-1 LTS firefox-esr 2025-12-11
Debian DSA-6078-1 stable firefox-esr 2025-12-10
Debian DLA-4402-1 LTS libsndfile 2025-12-11
Debian DLA-4400-1 LTS rear 2025-12-10
Fedora FEDORA-2025-9621c19da8 F43 httpd 2025-12-11
Fedora FEDORA-2025-47551b2aa2 F42 perl-CGI-Simple 2025-12-11
Fedora FEDORA-2025-3dd97ed203 F43 perl-CGI-Simple 2025-12-11
Fedora FEDORA-2025-a177cf4e1e F42 tinyproxy 2025-12-11
Oracle ELSA-2025-23035 OL10 firefox 2025-12-10
Oracle ELSA-2025-23034 OL9 firefox 2025-12-10
Oracle ELSA-2025-22854 OL10 kernel 2025-12-10
Oracle ELSA-2025-22865 OL9 kernel 2025-12-10
Oracle ELSA-2025-28040 OL9 kernel 2025-12-10
Oracle ELSA-2025-21657 OL7 libsoup 2025-12-10
Oracle ELSA-2025-23008 OL10 mysql8.4 2025-12-10
Oracle ELSA-2025-22096 OL7 tigervnc 2025-12-10
Oracle ELSA-2025-23048 OL8 tomcat 2025-12-10
Oracle ELSA-2025-23049 OL9 tomcat 2025-12-10
Oracle ELSA-2025-23052 OL10 tomcat9 2025-12-10
Oracle ELSA-2025-28040 uek-kernel 2025-12-10
SUSE SUSE-SU-2025:21137-1 SLE16 alloy 2025-12-10
SUSE SUSE-SU-2025:21145-1 SLE16 curl 2025-12-10
SUSE SUSE-SU-2025:21159-1 SLE16 dovecot24 2025-12-10
SUSE SUSE-SU-2025:4353-1 SLE15 oS15.6 fontforge 2025-12-10
SUSE SUSE-SU-2025:4347-1 SLE-m5.3 SLE-m5.4 SLE-m5.5 oS15.4 glib2 2025-12-10
SUSE SUSE-SU-2025:21158-1 SLE16 himmelblau 2025-12-10
SUSE SUSE-SU-2025:21164-1 SLE16 java-17-openjdk 2025-12-10
SUSE SUSE-SU-2025:21162-1 SLE16 java-21-openjdk 2025-12-10
SUSE SUSE-SU-2025:21180-1 SLE16 kernel 2025-12-10
SUSE SUSE-SU-2025:21147-1 SLE16 kernel 2025-12-10
SUSE SUSE-SU-2025:21179-1 SLE16 SLE-m6.2 oS16.0 kernel 2025-12-10
SUSE SUSE-SU-2025:21139-1 SLE16 SLE-m6.2 oS16.0 kernel 2025-12-10
SUSE openSUSE-SU-2025:15803-1 TW krb5 2025-12-10
SUSE SUSE-SU-2025:21140-1 SLE16 lasso 2025-12-10
SUSE SUSE-SU-2025:21150-1 SLE16 libvirt 2025-12-10
SUSE SUSE-SU-2025:21170-1 SLE16 mozjs128 2025-12-10
SUSE SUSE-SU-2025:21144-1 SLE16 mysql-connector-java 2025-12-10
SUSE openSUSE-SU-2025:15804-1 TW nvidia-open-driver-G07-signed-check 2025-12-10
SUSE SUSE-SU-2025:21161-1 SLE16 openssh 2025-12-10
SUSE SUSE-SU-2025:21132-1 SLE16 poppler 2025-12-10
SUSE SUSE-SU-2025:4364-1 MP4.3 SLE15 SES7.1 oS15.3 oS15.4 oS15.5 postgresql17, postgresql18 2025-12-11
SUSE SUSE-SU-2025:4363-1 SLE15 oS15.6 postgresql17, postgresql18 2025-12-11
SUSE openSUSE-SU-2025:0465-1 osB15 python-Django 2025-12-10
SUSE SUSE-SU-2025:21168-1 SLE16 python-cbor2 2025-12-10
SUSE SUSE-SU-2025:4352-1 oS15.4 oS15.6 python310 2025-12-10
SUSE openSUSE-SU-2025:15805-1 TW python311-Django 2025-12-10
SUSE SUSE-SU-2025:21136-1 SLE16 runc 2025-12-10
SUSE SUSE-SU-2025:21167-1 SLE16 strongswan 2025-12-10
SUSE SUSE-SU-2025:21152-1 SLE16 tomcat11 2025-12-10
SUSE SUSE-SU-2025:21149-1 SLE16 xwayland 2025-12-10
Ubuntu USN-7919-1 14.04 16.04 18.04 20.04 22.04 24.04 25.04 25.10 binutils 2025-12-10
Ubuntu USN-7924-1 16.04 18.04 20.04 22.04 24.04 25.04 25.10 libpng1.6 2025-12-11
Ubuntu USN-7922-1 18.04 20.04 linux, linux-aws, linux-aws-5.4, linux-gcp, linux-gcp-5.4, linux-hwe-5.4, linux-ibm, linux-ibm-5.4, linux-kvm, linux-oracle, linux-xilinx-zynqmp 2025-12-10
Ubuntu USN-7921-1 24.04 25.04 linux, linux-aws, linux-aws-6.14, linux-gcp, linux-hwe-6.14, linux-raspi 2025-12-10
Ubuntu USN-7920-1 25.10 linux, linux-aws, linux-gcp, linux-realtime 2025-12-10
Ubuntu USN-7923-1 20.04 22.04 qtbase-opensource-src 2025-12-11

The Walt Disney Company and OpenAI Partner on Sora

Hacker News
openai.com
2025-12-11 14:05:16
Comments...

AI optimism is a class privilege

Lobsters
joshcollinsworth.com
2025-12-11 14:02:50
Comments...
Original Article

A while back, in a slightly earlier era of AI 1 , a project was making the rounds which would read your GitHub profile, and create a personalized roast based on the contents.

It was intended, I assume, as a harmless, lighthearted novelty. I wanted to join in on the fun, so I put my profile in and tried it out.

I didn’t laugh at my roast.

It wasn’t clever, or funny, or even particularly unexpected. A tech-savvy stranger on Fiverr probably could’ve done better.

But more than that: I remember being surprised at how mean it was. Little of what the model produced even felt like a joke; instead, it just read as a slew of very personal insults.

And then I remember being surprised that the artificial cruelty actually affected me.

Despite knowing this was all a soulless (and as it turns out, humorless) machine making a poor attempt at comedy—one that nobody else even saw!—reading those words hurt. Bizarrely, I suppose, AI actually managed to hurt me.

And that was the first time I remember thinking about what AI was going to do to my children .

If I—a grown man with thick skin, hardened by decades of internet usage—can still be susceptible to highly personalized online bullying, what will it be like for my son, when some mean kid inevitably gets their hands on this technology and decides to put it to malicious use?

By the time my kids encounter real bullying, I’m sure derogatory jokes will be about the least harmful form of antagonism AI will be empowering. Imagine the damage one bad kid could cause using deepfakes, for example. Forget the days of starting a nasty rumor and spreading it around the school; now you can share a video of it happening.

Imagine the shame, intimidation, harassment, and trauma AI might enable a cruel juvenile to inflict— particularly once the tech has had another few years to improve. (To say nothing, of course, of what it might enable for an unethical adult.)

Imagine how absolutely unmitigable the damage would be.

My reaction wasn’t laughter; my reaction was horror at the realization that we’re racing to build the perfect bullying tool.

I was never exactly an optimist when it comes to AI. But that was the first time I realized exactly how dark the future I foresaw actually was.


Although it’s not an entirely correct description, I’ll use the term “AI optimist” a lot in this post, as it’s at least a serviceable label for a general group of people.

That group, to be a bit more descriptive, is made up of people who are excited about AI. This might include future developments, but they’re particularly excited about AI in the present and near term, and how they can use it right now. You might call them enthusiasts, or even believers, maybe. But in any case, they’re generally enthusiastic about AI, and aren’t overly concerned with costs or downsides.

You almost certainly know at least one or two of these people. Maybe you even are one. (If so: I’m not naive enough to think I’ll change your mind with this post, but I hope I’ll at least give you some things to think about.)

It seems to me that to be in this group—to regard AI, as it exists currently, with optimism and enthusiasm—requires at least a certain degree of privilege. Hence, the somewhat blunted title of this post.

I had long struggled to put the thought into words. But once it crystallized into this post’s titular sentence, I felt as though a great deal around me suddenly shifted into perspective.

So, that’s why I wrote this post; to share that perspective. It is my own, and it comes from my own experiences (and yes, through the lens of my own substantial privileges, class and otherwise). You can take it, or not, as you like.


It’s late 2025, and so you don’t need me to tell you how extreme opposing views on AI can be. Everyone has an opinion of AI, and the overwhelming majority fall to one far end of the spectrum or the other. There’s a vast divide between the sides, each fiercely passionate in their own entirely opposite ways.

For my part, I’m decidedly on the pessimist side of the chasm, for many reasons. Some I’ll get into here; others, I’ll mostly pass over, as they’ve been well covered elsewhere.

But for now, suffice to say: when I look around me at the impact AI is currently having, I see little reason for enthusiasm—let alone the little-questioned, quasi-religious belief that this fundamentally flawed technology might one day soon bring about some sort of economic, societal, and/or scientific revolution all on its own.

Come to think of it, “religious” might be a good word to describe how AI optimism feels, from the outside. It has fervent believers, prophecies from prominent figures to be taken on faith, and—of course, as with any religion—a central object of worship which can at all times be offered as The Answer, no matter what the question might happen to be.

In fairness: that’s not all AI optimists. I’m mostly describing the extreme ones.

Even among the more moderate optimists, though—ordinary people who just like the tech—the enthusiasm has always seemed…disproportionate, let’s say.

It was always perplexing to me that so many of my peers seemed so eager to be across the divide from me; that they were so much more impressed with AI than I was, and so indifferent to what I felt were alarming flaws and drawbacks.

They didn’t seem particularly different than me. In fact, many were my friends, connections, and people I looked up to.

We were looking at the same tech, with the same outcomes, and drawing entirely different conclusions. What was I missing?

The answer eventually hit me:

They see themselves as the ones benefiting from AI, and not as the ones it might cost.


I concede AI can occasionally be helpful for certain tasks, and I can understand the enthusiasm, as far as that goes. I don’t use it often, but admittedly some. (I do still write every word of every post on my own, however, hand-typed em dashes and all.)

I sometimes find AI helpful for generating reference images to use as starting points for illustrations, and occasionally for ideating, as a “rubber duck” of sorts. I also use it once in a while to compensate for my own color vision deficiency. But mostly, it helps me with code.

In full disclosure of all the mundane details: I mostly only use code completion suggestions in VS Code, even though they’re often hit and miss. I rarely use chat mode, and when I do, it tends to be mostly for rote tasks like format conversion or pattern matching. That’s pretty much it. Every time I’ve tried giving AI more responsibility than that, it’s let me down pretty spectacularly.

I’m deeply skeptical that AI offers a net productivity boost in general 2 , but particularly that it’s capable of high-quality frontend code. I theorize good frontend is just too subjective, too visual, balances too many concerns, and is too underrepresented in training data. (That might explain why developers in other specialties seem to report better results.)

I can already hear the enthusiasts scoffing and getting up to leave, because I don’t use AI “the right way,” by vibe-coding with Cursor agentic MCP, or whatever the flavor of the week is. And it’s true; I’ve never gone that deep with it.

That’s partly because I’ve heard too many horror stories about leaked secrets, deleted databases, and wiped hard drives. I don’t like the idea of giving a non-deterministic black box full control of my machine and/or production.

But it’s also because I like using my brain. Any passion I have for what I do comes largely from the process of ideating, building, and creatively solving a problem. Having a machine do all that for me and skipping to the result is as unsatisfying as a book full of already-completed sudoku puzzles, or loading up a save file where somebody else already played the first two thirds of a video game. I don’t do these things just because I want the result ; I also do them because I want the experience .

I want to improve! And it’s hard to imagine how that might happen if I’m not actually putting skills into practice.

All of that’s mostly beside the point anyway, though; my issues with AI have little to do with its level of effectiveness.


Even if my new coding buddy is severely prone to overconfidence, it’s still admittedly exciting when it makes tasks that would’ve been previously time-consuming and/or challenging quick and easy.

In order to be an AI optimist about this, however: that’s where I would have to stop thinking about it.

I would be forced to ignore what else my little coding buddy is getting up to when I’m not looking; the other impacts he’s having on other people’s lives.

Let’s take layoffs as an example.

In order to be an AI optimist, it seems to me you’d have to believe yours is not among the jobs at risk of being automated or downsized, and that you aren’t among the countless workers staring down displacement. (Or at least: not at risk of AI taking over the interesting and fulfilling parts of your work, as your role is reduced to acting as its manager.) After all, how could you feel enthusiastic about a threat to your own livelihood? 3

You’d need to be high enough in the org chart; far enough up the pyramid; advanced enough along the career ladder.

To be an AI optimist, I’m guessing you must not be worried about where your next job might come from, or whether you can even find one. The current dire state of the job market, I have to assume, doesn’t scare you. You must feel secure.

Maybe it’s because you’ve already made a name for yourself. Maybe you’re known at conferences, or on podcasts. Maybe you’re just senior enough that your résumé opens doors for you.

Or maybe you’ve been promoted into leadership. Maybe you spend your days in important meetings.

Maybe this is all a lot easier to be optimistic about with the right charts and graphs in front of you.

You almost certainly aren’t a junior, though, or an intern, or somebody trying to break into the field. You must not be near the rising tide engulfing entry-level workers across my industry and a wide range of others. Because, infamously, nobody is hiring juniors anymore. 4

It seems fairly safe to assume you aren’t in the first group against the wall, if you’re excited about the thing putting them there.

You probably aren’t a contractor, either, or working at a consultancy. And for that matter: you almost certainly aren’t an artist, or illustrator, or writer. You probably haven’t watched client dollars funnelled upwards, with the bitter knowledge that this thing eroding your income is only possible because it brazenly plagiarized you and a million other people who do what you do.

AI optimism probably means you’re in a position where nobody is stealing your work, or bulldozing your entire career field.

That’s the thing about being bullish on AI: to focus on its benefits to you, you’re forced to ignore its costs to others.

AI optimism requires believing that you (and your loved ones) are not among those who will be driven to psychosis, to violence, or even to suicide by LLM usage. At the very least, this means you feel secure in your own mental health; likely, it also means you have a wider and more substantial support system propping up your wellbeing.

(Not to put too fine a point on it, but: those things are otherwise known as privileges.)

AI optimism requires you to believe that, whoever will be impacted by the sprawling data centers, the massive electricity demands, the water consumption, and the other environmental hazards of the AI boom 5 , it won’t be you. Whatever disaster might happen, your neighborhood will be safe from it. Probably far away from it.


The harms of AI aren’t a standalone issue; as AI becomes a part of other technologies, systems, and parts of society, it’s exacerbating their existing problems, and accelerating damage already being done elsewhere.

I have to believe scammers are enthusiastic about AI; there’s likely never been a more helpful tool for fraud. Criminals and con artists have always been around, of course, but they’ve never had such powerful instruments at their disposal. After all, it’s much easier to rob somebody’s unsuspecting grandma when you can simply conjure a video of that person out of thin air, or perfectly imitate their voice on a phone call. 6

But that’s a relatively small scale of harms, aimed at individuals. The broader harms come from AI interacting with systems, like governments and their substructures.

Malicious state actors (both in and outside of the US) are wielding AI as a ruthlessly efficient propaganda machine, disseminating disinformation that’s more convincing than ever, faster than ever previously possible. Much of what’s being produced serves to dehumanize and victimize vulnerable groups, like immigrants, refugees, queer people, and political dissidents. Mainly (but not exclusively), this is to bolster authoritarian power.

It’s hard to imagine how one could be optimistic about the technology empowering such horrors, but I suppose knowing it probably won’t affect you must help.

I doubt I could feel very good about the tech helping me write emails faster if I knew that same tech was helping to make me, or people close to me, a target of violence.

Even when the intent might be good, however, AI often amplifies existing harms.

In the rush to shove AI into everything possible, we’ve now injected it into parts of the justice system, too. It’s in everything from facial recognition and surveillance tech to data and administrative work. It’s even in the legal system.

In theory, this is an efficiency boost. In theory, a machine should be less biased than humans.

In reality, not only do these models make mistakes at a rate that is utterly unacceptable in this context; they mimic and amplify the inherent racism present in their own training data. (Tech is always a mirror of its creators; it is never neutral.) Compounding this problem, AI is non-deterministic, and something of a black box, offering little to no way to inspect, challenge, or appeal its results.

Needless to say, this deployment of AI has already had a profoundly devastating impact on real people’s lives—damage which shows no signs of slowing.

Forgive me, but I can’t imagine being excited that this technology which is rapidly accelerating inequality is also helping me save a little time on writing code.

I have to imagine such excitement would require me to think none of this could happen to me , or to anybody who matters to me.

Or, at the very least: that it’s all undeniably unfortunate, but ultimately, in service to some greater good. A justifiable tradeoff; a glitch to be ironed out.

AI optimism requires you to see the lives of at least some of your fellow humans as worthwhile sacrifices; bug reports in a backlog.

But even when there’s no larger system behind it, and even with no broader goal or agenda at all—malicious or otherwise—AI can still amplify existing harms.

One example at the top of my mind: Facebook was recently flooded with AI-generated videos of women being violently strangled. There was no apparent deeper purpose behind this horrifying wave of misogynistic terrorism, however; it just happened to be what the algorithm rewarded. That content generated engagement, and that engagement generated more of the same content.

A similar thing happened recently on TikTok, but this time it was videos of immigrants being ruthlessly brutalized that struck a nerve and triggered a proliferation of objectionable content across the platform.

Sometimes this effect is more or less benign (see: Shrimp Jesus); other times, a machine built to provoke a reaction will inevitably hit paydirt in the horrifying, the traumatizing, the inhumane, and the unacceptable.

AI isn’t just harmful on its own; it’s a force multiplier for existing harms. The intent behind it, if one even exists, is irrelevant; the impact is the same.


I think all of this is why so many of us are so pessimistic about AI; we can see very clearly the many ways it represents a threat to us, and to the things we care about.

For so many, AI stands to take away something important from us and those around us—safety, stability, creativity—and replace it with benefits for somebody else; productivity and profit, going mainly to those above us.

I think so many people are against AI because they see how it functions as a system for taking away from those with the least, to give even more to the already highly privileged.

This is why the promise of AI fixing everything and empowering workers is so important; it’s the linchpin of the whole operation. It’s required to get buy-in from the people who stand to lose the most.

So let’s talk about that next.


Some might argue I’m missing the entire point here, by focusing so much on the present. Optimism isn’t about what’s happening right now, they might say; it’s about the future!

Forget what AI actually is currently; the models will get better, the data centers more energy-efficient, the tokens cheaper, the mistakes rarer, the harms mitigated, and so on, until we have something that changes the world for the better; an actual benevolent technology that solves our problems, in whatever way. Maybe it even is, or leads to, AGI (actual human-level artificial intelligence; the thing AI used to mean before 2022).

I take issue with these predictions, for several reasons:

  • While I’m sure the technology and its costs will continue to improve, it’s hard to see how that would mitigate most of these harms. Many would just as likely be intensified by greater speed, efficiency, and affordability.

  • I’m wary of predictions in general, 7 but particularly those that bear little to no resemblance to observed reality. Absent a clear evidential link, prediction tends to be based purely on hype and speculation, and there’s a wild overabundance of both around AI.

    It’s reasonable to believe the tech will improve. It seems much less reasonable to think it might suddenly change into something new, develop presently impossible capabilities, or take us somewhere far distant with absolutely no clear path or connection between here and there.

  • Most of the utopian visions of AI center on the idea that AI is sentient, which it categorically, factually, is not. Language and statistics can mimic cognition convincingly, and our human brains are overly eager to anthropomorphize anything that vaguely imitates human behavior. Thinking and reasoning are very different from statistically emulating communication. 8

  • Many LLM experts, including OpenAI’s own researchers, tell us the models are already approaching their realistic ceiling, thanks in no small part to the exhaustion of training data that is not already tainted with AI content. They also tell us it’s literally impossible to stop LLMs from making things up. (Really: actual people from OpenAI publicly admitted LLMs will never stop lying. It’s an un-fixable bug, because it’s a core component of how LLMs work.)

  • Even if we ignore all the technical limitations, or find ways around them: new advancements simply don’t work that way. They never have. (The equitable, worker-liberating way, that is.) Tech doesn’t free workers; it forces them to do more in the same amount of time, for the same rate of pay or less.

    If you become twice as productive, you don’t get twice the pay or twice the time off; you just get twice the workload—likely because somebody else doing the same job just got laid off, and now you’re doing their work, too.

    This sort of technology distributes instability to the many at the bottom, while consolidating benefit at the top—and there has arguably never been a more efficient mechanism for this than AI. I see absolutely no reason to believe this time will be different, especially because:

  • AI models exist in the consolidated hands of a precious few huge companies, which are themselves quite obviously happy to do away with as many of their own workers as they possibly can. AI will serve, and is already serving, corporate interests first and foremost— especially as these models continue to replace core infrastructure, like web search, and can be manipulated however the companies please. 9

  • Regardless, even if you naively believe in the tech: you’re still willing to put up with all the harms and dangers of AI until that imagined potential future arrives—which brings us back to the original point.

Some might also point to positive use cases for AI. Accessibility is a popular one. (In fact, it’s so popular that online AI apologists have realized all they need to do is invoke the word “ableist” to shut down any discussion.)

Yes, there are good use cases for AI. I don’t think most reasonable people would argue with that. Like I said: I sometimes even use it myself, to compensate for my own physical inability. But calling out such cases tends to be a bad-faith attempt to justify all of AI’s other harms by using disabled people and others who might benefit from AI, rather than reckoning with the damage and rethinking our deployment of AI in order to maximize good and minimize harm for everyone. We don’t have to accept every use of AI and all of its impacts just because some of them might be beneficial.

Finally, let me take a moment to address anyone who might be thinking: sure, AI is being used for some bad things, but I’m not personally using it that way. What’s wrong with me just focusing on the good parts and enjoying the benefits to me?

My friend, that’s privilege. You are literally describing privilege.


Let me close this post the same way it began; with a personal example from my own family.

I have a newborn daughter.

I began writing this post before she was born, and mostly because of her, I’m now finishing it up several weeks later. (I’ve fit most of this writing into her nap schedule, typing as she sleeps beside me.)

And I can’t shake the thought that I’m welcoming her into a world where so much of the potential malicious misuse of AI could one day be directed at her.

Looking beyond all the things we’ve already talked about: technology in general has made things like stalking and abuse easier than ever. But AI goes even further. I live knowing AI will allow any degenerate pervert with an internet connection to create deepfakes of this little girl—up to and including pornography—without any consent at all, at barely the click of a button.

If this sounds like a horrifying, disturbed thought: it is! It absolutely is! But I’m not coming up with this on my own; this is already happening to untold numbers of women, many of whom are school-aged girls.

To be an AI optimist, I would need to turn away from this. Ignore it. Consider it all just part of the plan; a price to be casually paid (hopefully by somebody else) in exchange for…what? Writing software a little bit faster?

Optimism would require me to believe that my children probably won’t have that kind of experience, or any others I’ve described here.

To believe they’ll be in better schools. Better neighborhoods. Have better friends. Better support systems.

Won’t ever attract the attention of the wrong guy, or piss off the wrong girl.

Won’t ever live in the wrong places. Won’t ever find themselves in the wrong part of the system.

Won’t end up on the wrong side of the accelerated inequality.

AI optimism requires you to see yourself and your loved ones as safe from AI; as the passengers in the self-driving car, and not as the pedestrians it might run over.

I don’t know how you see yourself that way without a great deal of class privilege.

The rest of us?

I guess it’s hard to see the convenience as worth the price—let alone exciting—when you know you could be among the ones paying for it.

Show HN: An endless scrolling word search game

Hacker News
endless-wordsearch.com
2025-12-11 14:01:22
Comments...

Teenagers are presenting Christmas wishlists, Powerpoint-style – my daughter included

Guardian
www.theguardian.com
2025-12-11 14:00:15
A far cry from hand-scrawled letters to Santa, on graphic design platform Canva users have created a whopping 1.4m Christmas wishlist presentationsGet our weekend culture and lifestyle emailTwas three weeks before Christmas, when all through the house, not a creature was stirring, except for my 13-y...
Original Article

Twas three weeks before Christmas, when all through the house, not a creature was stirring, except for my 13-year-old daughter, who emerged from her lair with a level of vim uncommon in daylight hours.

As she made her approach with laptop aglow, her droll little mouth was drawn up in a bow. It then became apparent that I was about to become the audience (some may say “victim”) of a recent cultural phenomenon: the Christmas wishlist slideshow.

Graphic design platform Canva seems to be the tool of choice for many teenagers. Canva say the first Christmas wishlist template was added to their library in 2019. Since 2022, people have created more than 3.35m Christmas wishlist designs. Presentation-style wishlists have jumped 61% between 2024 and 2025, totalling 1.4m. Social media is awash with videos of exuberant adolescents in expensive sweatsuits making family presentations on huge TVs, along with countless tutorials on how to make them “aesthetic” – a word I never tire of reminding my children is a noun, not an adjective.

An example of a Christmas wishlist presentation on Canva by Saga Design Studio. Photograph: Canva/Saga Design Studio
The Christmas wishlist slideshow phenomenon seems predominately led by girls. Photograph: Canva/Saga Design Studio

Suffice to say it was not visions of sugar plums that had been dancing in my daughter’s head. Instead, I was treated to an initial collage of brands and stores that she holds in mysterious esteem, followed by a series of categorised slides covering the teenage perennials: clothes, jewellery, decor, beauty products and, thankfully, some books. While her slideshow featured images, “inspo” and prices, she refrained from the vulgarity of hyperlinks, which, judging by the online evidence, are a common addition. And, I guess, could be handy.

In my rigorous and highly scientific research (read: texting every parent in my contacts list) it was revealed that girls are the main perpetrators, usually embracing this nouveau custom in their late tweens before growing out of it a few years later. Many who were on the receiving end of these pitches appreciated the initiative and practicality, while at the same time lamenting the creep of bland online “get ready with me” and “unboxing” tropes into family traditions. As one friend, whose two boys had never subjected her to the experience, put it – it sounds a bit like people who spend too much time in office mode, then plan their camping holiday meals in an Excel spreadsheet.

A letter to Santa from a Guardian staff member’s eight-year-old child. Photograph: Guardian Design

Efficient? Certainly. But is it as adorable as the hand-scrawled, misspelled and glitter-smeared entreaties for Santa’s benevolence we knew and loved? Not quite. But as those sanguine mementoes of her innocence gather dust in a drawer somewhere, I was surprised to appreciate the thought, effort and moderation she applied to her newfangled approach. She even added some “dupe” options “as a backup”. Well played, kid.

While it’s hard to feel much seasonal magic emanating from a template-built pitch deck, there’s no denying it beats the look of thinly veiled disappointment when your best attempts fall flat. As my cousin said: “There are times in parenting when you just have to admit defeat and play by the rules of the new guard. God knows there’s only so much pity one can take from their teenager!”

Armed with a mood board of colour palettes, preferred necklines, coveted cosmetics and (weep) a teddy bear, I told her, with confidence, it all looked feasible. Her eyes – how they twinkled! Her dimples, how merry!

And there it was anew, that joy parents live for. Now, go to bed, please.

AIPAC Spent Millions to Keep Her Out of Congress. Now, She Sees an Opening.

Intercept
theintercept.com
2025-12-11 14:00:00
Growing dissatisfaction with the Israel lobby may pave a lane for Nida Allam, who launched her congressional campaign in North Carolina Thursday with the backing of Justice Democrats. The post AIPAC Spent Millions to Keep Her Out of Congress. Now, She Sees an Opening.  appeared first on The Intercep...
Original Article

A progressive North Carolina official who lost her 2022 congressional race after the pro-Israel lobby spent almost $2.5 million against her sees a fresh opening this midterm cycle, as a public disturbed by the genocide in Gaza has turned pro-Israel spending into an increasing liability.

Durham County Commissioner Nida Allam is preparing for a rematch against Rep. Valerie Foushee, D-N.C., for the 4th Congressional District seat she lost by nine points in 2022. This time, the Israel lobby’s potential influence has shifted: Feeling the pressure from activists and constituents, Foushee has said she won’t accept money from the American Israel Public Affairs Committee.

Allam, who launched her campaign Thursday with the backing of the progressive group Justice Democrats, told The Intercept that wouldn’t be a shift for her.

“I’ve never accepted corporate PAC or dark money, special interest group money, or pro Israel lobby group money,” said Allam, whose 2020 election to the county commission made her the first Muslim woman elected to public office in North Carolina.

The country’s top pro-Israel lobbying groups and the crypto industry spent heavily to help Foushee beat Allam in 2022, when they competed in the race for the seat vacated by former Rep. David Price, D-N.C. AIPAC’s super PAC, United Democracy Project, and DMFI PAC, another pro-Israel group with ties to AIPAC, spent just under $2.5 million backing Foushee that year. The PAC funded by convicted crypto fraudster Sam Bankman-Fried also spent more than $1 million backing Foushee.

After nearly two years of pressure from activists in North Carolina enraged by Israel’s genocide in Gaza, Foushee announced in August that she would not accept AIPAC money in 2026, joining a growing list of candidates swearing off AIPAC money in the face of a new wave of progressive challengers.

This time, if pro-Israel and crypto groups spend in the race, it’s on Foushee to respond, Allam said.

“If they decide to spend in this, then it comes down to Valerie Foushee to answer, is she going to stand by the promise and commitment she made to not accept AIPAC and pro-Israel lobby money?” Allam said. “This district deserves someone who is going to be a champion for working families, and you can’t be that when you’re taking the money from the same corporate PAC donors that are funding Republicans who are killing Medicare for all, who are killing an increased minimum wage.”

Foushee’s campaign did not immediately respond to a request for comment.

Allam, who helped lead Bernie Sanders’s 2016 presidential campaign in North Carolina, is the seventh candidate Justice Democrats are backing so far this cycle. The group — which previously recruited progressive stars including Reps. Alexandria Ocasio-Cortez, D-N.Y., and Ilhan Omar, D-Minn. — is endorsing candidates challenging incumbents next year in Michigan, California, New York, Tennessee, Missouri, and Colorado. Justice Democrats is taking a more aggressive approach to primaries this cycle after only endorsing its incumbents last year and losing two major seats to pro-Israel spending. The group plans to launch at least nine more candidates by January, The Intercept reported.

Allam unveiled her campaign with other endorsements from independent Vermont Sen. Bernie Sanders, Sunrise Movement, the Working Families Party, and Leaders We Deserve, a PAC launched by progressive organizers David Hogg and Kevin Lata in 2023 to back congressional candidates under the age of 35. She said she sees the local impacts of the Trump administration on working families every day in her work as a Durham County commissioner.

“What I’m hearing from our residents every single day is that they don’t feel that they have a champion or someone who is standing up and fighting for them at the federal level, and someone who is advocating for working families,” she said. “This is the safest blue district in North Carolina and this is an opportunity for us as a Democratic Party to have someone elected who is going to be championing the issues for working families — like Medicare for All, a Green New Deal — and has a track record of getting things done at the local level.”

Allam is rejecting corporate PAC money and running on taking on billionaires and fighting Immigration and Customs Enforcement, which has been carrying out raids and arresting residents in the district. She’s also supporting a Green New Deal, Medicare for All, and ending military aid to Israel. She began considering a run for office after a man murdered her friends in the 2015 Chapel Hill shootings.

Small dollar donors powered Allam’s 2022 campaign, when she raised $1.2 million with an average donation of $30. She’s aiming to replicate that strategy this cycle, she said.

“Trump is testing the waters in every way possible,” Allam said. “The only way that we’re going to be able to effectively fight back against Trump is by passing the Voting Rights Act, is by taking big corporate money out of our elections, by ending Citizens United. Because they’re the same ones who are fighting against our democracy.”

In its release announcing Allam’s campaign on Thursday, Justice Democrats criticized Foushee for taking money from corporate interests, including defense contractors who have profited from the genocides in Gaza and Sudan. “In the face of rising healthcare costs, creeping authoritarianism, and ICE raids, and the highest number of federal funding cuts of any district in the country, leadership that only shows up to make excuses won’t cut it anymore,” the group wrote.

Foushee served in the North Carolina state legislature for more than two decades before being elected to Congress in 2022. She first campaigned for Congress on expanding the Affordable Care Act and moving toward Medicare for All, passing public campaign financing and the Voting Rights Act, and a $15 minimum wage. Since entering Congress in 2023, Foushee has sponsored bills to conduct research on gun violence prevention, to expand diversity in research for artificial intelligence, establish a rebate for environmental roof installations, and support historically Black colleges and universities.

Foushee’s evolving stance on some Israel issues reflects a broader shift among Democrats under pressure from organizers and constituents.

Amid rising public outrage over the influence of AIPAC in congressional elections in recent years, Foushee faced growing criticism and protests in the district over her refusal to call for a ceasefire in Gaza and her support from the lobbying group. After organizers tried to meet with her and held a demonstration blocking traffic on a freeway in the district, she signed onto a 2023 letter calling for a ceasefire but did not publicize her support for the letter or comment on it publicly, The News & Observer reported.

At a town hall in August, an attendee asked Foushee if she regretted taking AIPAC money. In response, she said she would no longer accept money from the group. Three days later, she co-sponsored Illinois Rep. Delia Ramirez’s Block the Bombs to Israel Act to limit the transfer of defensive weapons to Israel.

“We cannot allow AIPAC and these corporate billionaires to scare us into silence,” Allam said. “It’s actually our mandate to take them on directly, especially now as they’re losing their sway in the Democratic Party.”

"Slower Form of Death": Despite Ceasefire, Israel Keeps Killing in Gaza as Winter Storm Floods Tents

Democracy Now!
www.democracynow.org
2025-12-11 13:54:19
Palestinians were battered with rain and freezing temperatures overnight as winter storm Byron hit the Gaza Strip. Soaked tents and makeshift shelters flooded, causing some mattresses to float and improvised roofs to blow away. An 8-month-old baby girl, Rahaf Abu Jazar, died from hypothermia. Mouree...
Original Article


Palestinians were battered with rain and freezing temperatures overnight as winter storm Byron hit the Gaza Strip. Soaked tents and makeshift shelters flooded, causing some mattresses to float and improvised roofs to blow away. An 8-month-old baby girl, Rahaf Abu Jazar, died from hypothermia. Moureen Kaki, an aid worker living in Gaza, says conditions at hospitals have not improved since the announcement of the so-called ceasefire. “It is not really a ceasefire,” she says. “It’s just a slower form of death.”



Guests
  • Moureen Kaki

    head of mission for GLIA International, a medical aid organization.


Please check back later for full transcript.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.


"My Advice to Parents Is Learn from Your Kids": Mahmood Mamdani on Raising Zohran, NYC's Next Mayor

Democracy Now!
www.democracynow.org
2025-12-11 13:47:54
The acclaimed academic and writer Mahmood Mamdani speaks with Democracy Now! about the rise of his son, New York City Mayor-elect Zohran Mamdani. The professor cites Zohran’s “refusal to budge, to soften his critique of the state of Israel” as a critical aspect of his rise to power...
Original Article


The acclaimed academic and writer Mahmood Mamdani speaks with Democracy Now! about the rise of his son, New York City Mayor-elect Zohran Mamdani. The professor cites Zohran’s “refusal to budge, to soften his critique of the state of Israel” as a critical aspect of his rise to power. “His refusal to change his stance told the electorate that this was a man of principle, that affordability was not just merely rhetoric, that he could be taken seriously at his word,” Mahmood says.


Please check back later for full transcript.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.


Craft software that makes people feel something

Hacker News
rapha.land
2025-12-11 13:45:08
Comments...
Original Article

So, I woke up today. Got my coffee, family went to sleep, and I have a free afternoon.

I thought about writing something. I may delete this article, but if you are reading this, it means I went through with it.

Recently, people have been asking me why I’m pausing Boo to work on a programming language. I think it would actually be cool to write down how I feel.

Boo is a code editor I created solely for myself; I never had the intention of making it a mainstream editor. Of course, it would be fun if people used it, but that was never my goal. This year I got it working in a functional state, where I can actually use it for my daily work. It has innovative human-keyboard navigation and replaces the LSP system with something faster and less costly for the OS. So why on earth am I not open-sourcing it? That’s what people keep asking me.

First, let’s go step by step.

My mind isn’t really moved by the idea that it would be a success or a failure — the end user of Boo is me. I don’t feel it’s there yet; in fact, I think software should inspire us. Working on Rio Terminal and Boo in my free time — both written in Rust and sharing many similarities — affects my joy, because it starts to become something automatic. Both have a similar architecture, language, release process, and so on.

Since I was a kid, I liked to build Lego blocks. That’s probably what I did the most besides playing football or video games. The fun thing about Lego is that one day you can build a castle, and the next day you can build a ship. Not necessarily using the same pieces and colors — you can actually add a lot of stuff that’s external to what you have, like a wood stick.

When programming becomes repetitive, the odds of you creating something that makes people go “wow” are reduced quite a bit. It isn’t a rule, of course. You need to be inspired to make inspiring software.

I always use the example of The Legend of Zelda: Breath of the Wild . This game is so well crafted that I know people who don’t even like video games but bought a console just to play it — and once they finished, they sold everything. This is what I’m talking about: taking time to build something so that once people try it, they remember it for as long as they live.

Boo isn’t a business. I don’t need or want to make money out of it. I don’t have a deadline, nor do I want to create another VS Code. I don’t feel like forcing it to happen.

In that case, I don’t necessarily need to stop building Lego blocks, right? I’ll just park it there, and when the inspiration comes back, I’ll pick it up where it was. That being said, I paused Boo, and I am working on my own programming language. Eventually, my idea is to rewrite Boo to use it.

“Wow! That’s a lot of work.” Indeed. But it’s my hobby stuff. I’ve always loved programming languages, and I am having a blast learning more about binaries and compilers. So, I don’t really feel I need to follow people’s cake recipe for success. That’s how my mind works, and I will stick with it.

By the way, this article was written using Boo.

French supermarket's Christmas advert is worldwide hit (without AI) [video]

Hacker News
www.youtube.com
2025-12-11 13:35:55
Comments...

"Slow Poison": Scholar Mahmood Mamdani on New Book About Uganda, Decolonization & More

Democracy Now!
www.democracynow.org
2025-12-11 13:29:39
We speak with the acclaimed academic and writer Mahmood Mamdani, who has just released a new book, Slow Poison: Idi Amin, Yoweri Museveni, and the Making of the Ugandan State. Mamdani, who has taught at Columbia for decades, was raised in Uganda and first came to the United States in the 1960s to st...
Original Article


We speak with the acclaimed academic and writer Mahmood Mamdani, who has just released a new book, Slow Poison: Idi Amin, Yoweri Museveni, and the Making of the Ugandan State . Mamdani, who has taught at Columbia for decades, was raised in Uganda and first came to the United States in the 1960s to study. He and his family were later expelled from Uganda during Idi Amin’s dictatorship. The book “is about the reversal of the anti-colonial movement” in Uganda, says Mamdani. “The anti-colonial movement fought to create a nation out of a fragmented country … and I speak of slow poison as a gradual, piecemeal, step-by-step cutting up of the country so that you no longer have a single citizenship.”



Guests
  • Mahmood Mamdani

    professor of government and professor of anthropology and Middle Eastern, South Asian, and African Studies at Columbia University. He previously served as director of the Makerere Institute of Social Research in Kampala.


Please check back later for full transcript.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.


Why no fish wants a tongue-eating parasitic louse in its mouth

Hacker News
animals.howstuffworks.com
2025-12-11 13:28:12
Comments...
Original Article
Parasite got your tongue? This white trevally has had its tongue eaten by a parasitic louse, Cymothoa exigua. Shutterstock

We are not predisposed to think well of parasites, but not all organisms that freeload off others are created equal. For instance, some parasites steal food that others have gathered, some force other animals to raise their babies, and some just use other organisms for locomotion. However, some parasites slowly kill, suck the life force from or even control the minds and actions of their hosts. It is very unusual, however, for a parasite to ingest and then replace one of its host's body parts, but, hey, this is a big, weird world and evolution has dabbled in a little bit of everything.

Prosthetic Parasites

Take, for instance, the parasitic isopod Cymothoa exigua, which parasitizes fish. This isopod, which is a crustacean like a shrimp or a lobster (it looks a bit like a roly poly, or sow bug, which is a terrestrial crustacean), lives in the ocean and makes a living off a few different species in the perch family — mostly snappers and drums. The living they make might seem a little much (read: Boschian horror show) for our refined human tastes, but Cymothoa exigua makes an honest living attaching to a fish's tongue, sucking blood from it until it falls off, and then replacing it by gripping onto the tongue stump and acting as a prosthetic tongue for the rest of its host's life.

"It is now in a position to consume what the fish is eating, or consume its blood and tissue," says Regina Wetzer, curator and director of the Marine Biodiversity Center at the Natural History Museum of Los Angeles County.

Although there are other mouth-infesting isopods out there feeding on other types of fish like barramundi or mahi-mahi, Cymothoa exigua is the only one known to science that eats and then replaces the tongue.

How Protandric Hermaphrodites Work

"Isopods in the family Cymothoidae are parasites of fishes, and as juveniles all cymothoids must find and attach to their host," says Wetzer. "In the case of Cymothoa exigua — the species that is attached to the tongue — males enter the body via the gills, mature, mate and females move to the tongue."

How, you might ask, do the females enter the fish if only male juveniles enter the fish's gill slits? Cymothoa exigua is a protandric hermaphrodite, meaning it has the ability to change from male to female once it is an adult. In other words, all of them are born male, and as a baby enters a fish's gills, it keeps maturing as a male, but once another male juvenile shows up, the first one gets the signal that it's time to transform into a female. After this business has been sorted out, the female crawls into the fish host's mouth and starts feeding on its tongue. Once she has replaced the tongue, she is free to mate with any of the males hanging out in the fish's gill chamber and raise babies in a nice, safe cave.

The Cymothoa exigua, or tongue-eating louse, is a parasitic crustacean of the family Cymothoidae. It enters a fish (here a sand steenbras, Lithognathus mormyrus) through the gills and then attaches itself to the fish's tongue. Wikimedia Commons (CC BY-SA 3.0)

How to Replace Somebody's Tongue

Cymothoa exigua is a powerful little crustacean, with seven pairs of legs tipped with spines, which help her anchor into the fish's mouth. However, the first step in the process is to use her five sets of jaws modified with a variety of ice-pick-like tubes to puncture the fish's tongue and suck out the fish's blood. This process, by the way, is not thought to be very pleasant for the fish.

As the isopod drains the fish's tongue of blood, the muscle itself atrophies and withers away. At this point, she grasps what remains of the tongue stub with three or four of her spined leg sets and digs in, functionally replacing the tongue all together.

As unpleasant as this is, these isopods generally don't kill their host. However, Cymothoa exigua does not survive well without a host.

"Without its host adult, fully mature isopods would not survive well, as it's an obligate parasite," says Wetzer. "It has lousy swimming capabilities and gravid females — females with eggs and juveniles in her pouch — are especially non-agile. This is in contrast to some species [of the same family of isopods] which are free-living and can occur in such large numbers that they can deflesh a fish or body entirely."

Hackers exploit unpatched Gogs zero-day to breach 700 servers

Bleeping Computer
www.bleepingcomputer.com
2025-12-11 13:19:50
An unpatched zero-day vulnerability in Gogs, a popular self-hosted Git service, has enabled attackers to gain remote code execution on Internet-facing instances and compromise hundreds of servers. [...]...
Original Article


An unpatched zero-day vulnerability in Gogs, a popular self-hosted Git service, has enabled attackers to gain remote code execution on Internet-facing instances and compromise hundreds of servers.

Written in Go and designed as an alternative to GitLab or GitHub Enterprise, Gogs is also often exposed online for remote collaboration.

CVE-2025-8110, the Gogs RCE vulnerability exploited in these attacks, stems from a path traversal weakness in the PutContents API. The flaw allows threat actors to bypass the protections implemented for a previously patched remote code execution bug (CVE-2024-55947) by using symbolic links to overwrite files outside the repository.

While Gogs versions that addressed the CVE-2024-55947 security bug now validate path names to prevent directory traversal, they still fail to validate the destination of symbolic links. Attackers can abuse this by creating repositories containing symbolic links pointing to sensitive system files, and then using the PutContents API to write data through the symlink, overwriting targets outside the repository.

By overwriting Git configuration files, specifically the sshCommand setting, attackers can force target systems to execute arbitrary commands.
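
To make the mechanism concrete, here is a minimal sketch in Go (the language Gogs is written in) of the kind of check whose absence the researchers describe. This is not Gogs' actual code, and the paths in it are hypothetical: a lexical path check alone stops plain "../" traversal, but a write endpoint also has to refuse to write through a symbolic link committed into the repository, otherwise the write lands wherever the link points, such as a Git config whose sshCommand an attacker controls.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// validateWriteTarget normalizes a requested path and refuses to write
// through an existing symbolic link, so a link committed into a repository
// cannot redirect the write to a file outside it (for example a Git config
// whose sshCommand an attacker wants to set).
func validateWriteTarget(repoRoot, requested string) (string, error) {
	// Lexical normalization catches plain "../" traversal; the earlier
	// CVE-2024-55947 fix works at roughly this level.
	target := filepath.Join(repoRoot, filepath.Clean("/"+requested))
	if !strings.HasPrefix(target, filepath.Clean(repoRoot)+string(filepath.Separator)) {
		return "", fmt.Errorf("path escapes repository: %q", requested)
	}

	// Symlink check: the validation the researchers describe as missing.
	if fi, err := os.Lstat(target); err == nil && fi.Mode()&os.ModeSymlink != 0 {
		return "", fmt.Errorf("refusing to write through a symlink: %q", requested)
	}
	return target, nil
}

func main() {
	// Hypothetical repository path, for illustration only.
	p, err := validateWriteTarget("/srv/gogs/repos/example/project.git", "docs/README.md")
	if err != nil {
		fmt.Println("rejected:", err)
		return
	}
	fmt.Println("ok to write:", p)
}

The general point holds for any server-side write path: normalize the name first, then check what the path actually is on disk before touching it.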

Wiz Research discovered the vulnerability in July while investigating a malware infection affecting a customer's Internet-facing Gogs server. In total, the researchers found over 1,400 Gogs servers exposed online, with more than 700 instances showing signs of compromise.

Gogs servers exposed online (Shodan)

All compromised instances identified during the investigation of these attacks showed identical patterns, including repositories with random eight-character names created within the same timeframe in July, suggesting a single actor or group using automated tools is behind the campaign.

"In our external scan, we identified over 1,400 Gogs servers publicly exposed to the internet. Many of these instances are configured with 'Open Registration' enabled by default, creating a massive attack surface," they said .

Wiz also found that the malware deployed was created using Supershell, an open-source command-and-control (C2) framework that establishes reverse SSH shells over web services. Further analysis revealed the malware communicated with a command-and-control server at 119.45.176[.]196.

The researchers reported the vulnerability to Gogs maintainers on July 17, and the maintainers acknowledged the flaw on October 30, when they were still developing a patch. According to a disclosure timeline shared by Wiz Research, a second wave of attacks was observed on November 1.

Gogs users are advised to immediately disable the open registration default setting and limit access to the server using a VPN or an allow list. Those who want to check whether their instance has already been compromised should look for suspicious use of the PutContents API and for repositories with random 8-character names.
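
As a starting point for the second indicator, a rough, hypothetical helper along these lines (again in Go, and not an official detection tool) can flag candidate repositories. It assumes the common layout in which a Gogs instance stores bare repositories under <repository root>/<user>/<repo>.git; the reposDir value is an assumption and needs to be adjusted for your install, and a match is only a prompt for manual review, not proof of compromise.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"regexp"
)

func main() {
	// Hypothetical repository root; point this at your instance's repo storage.
	reposDir := "/srv/gogs/repos"

	// Repositories named with exactly eight alphanumeric characters,
	// matching the naming pattern Wiz observed on compromised instances.
	suspicious := regexp.MustCompile(`^[A-Za-z0-9]{8}\.git$`)

	users, err := os.ReadDir(reposDir)
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read repository root:", err)
		return
	}
	for _, user := range users {
		if !user.IsDir() {
			continue
		}
		userDir := filepath.Join(reposDir, user.Name())
		repos, err := os.ReadDir(userDir)
		if err != nil {
			continue
		}
		for _, repo := range repos {
			if suspicious.MatchString(repo.Name()) {
				fmt.Println("review:", filepath.Join(userDir, repo.Name()))
			}
		}
	}
}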


Is War Next? U.S. Seizes Venezuelan Oil Tanker as Anti-Maduro Campaign Escalates

Democracy Now!
www.democracynow.org
2025-12-11 13:13:03
U.S. troops seized an oil tanker off the coast of Venezuela on Wednesday, a major escalation that the Venezuelan government called “international piracy.” We speak with New York University professor Alejandro Velasco about the Trump administration’s intentions in the country, which...
Original Article

This is a rush transcript. Copy may not be in its final form.

NERMEEN SHAIKH : In a major escalation, U.S. troops seized an oil tanker off the coast of Venezuela, a day after U.S. fighter jets flew over the Gulf of Venezuela, the closest the U.S. has come to the country’s airspace. Attorney General Pam Bondi released video showing U.S. forces rappelling from helicopters and pointing weapons at sailors. Bondi claimed the tanker had been used to transport sanctioned oil from Venezuela and Iran. President Trump confirmed the raid while speaking with reporters at the White House.

PRESIDENT DONALD TRUMP : As you probably know, we’ve just seized a tanker on the coast of Venezuela, large tanker, very large, largest one ever seized, actually. And other things are happening, so you’ll be seeing that later, and you’ll be talking about that later with some of the people. …

REPORTER 1: We’re interested in the seizure of this tanker. What happens to the oil on this ship?

PRESIDENT DONALD TRUMP : Well, we keep it, I guess. I don’t know.

REPORTER 2: Where does it go? What port does it go to?

PRESIDENT DONALD TRUMP : Well, you’ll have to follow the tanker. You know, you’re a good newsman. Just follow the tanker.

REPORTER 3: Do you know where it was going? Was it going to China?

PRESIDENT DONALD TRUMP : Follow it. Follow it. Get a helicopter. Follow the tanker.

REPORTER 4: Is it true it was going to Cuba?

PRESIDENT DONALD TRUMP : But we’re going to — I assume we’re going to keep the oil.

AMY GOODMAN : The Venezuelan government denounced the action, calling it “blatant theft” and “an act of international piracy.” Venezuelan President Nicolás Maduro spoke in Caracas Wednesday.

PRESIDENT NICOLÁS MADURO : [translated] Anyone who wants Venezuelan oil must respect the law, the constitution and national sovereignty and get down to producing, invest and sell our oil. Venezuela an oil colony? Never again, neither a colony nor slaves.

AMY GOODMAN : He also started singing “Don’t Worry, Be Happy.”

The U.S. seizure of the oil tanker comes as the Pentagon ramps up its military buildup in the Caribbean ahead of possible strikes on Venezuela. Since September, the U.S. has carried out more than 20 deadly strikes on alleged drug boats in the Caribbean and the Pacific, though they’ve never offered evidence.

Meanwhile, on Wednesday, the Nobel Peace Prize was awarded to the right-wing Venezuelan opposition leader María Corina Machado. On Tuesday night, hundreds of protesters marched in Oslo to condemn the selection of Machado, who’s supported Trump’s threats against the Venezuelan government. In October, she dedicated the peace prize to President Trump. Machado did not attend the prize ceremony on Wednesday but later appeared in Oslo waving to supporters from the balcony of her hotel. She spoke at the Norwegian parliament today. CNN reports the United States gave her support to travel to Oslo from Venezuela, where she had been in hiding. She apparently, last step, flew from Bangor, Maine, to Oslo.

NERMEEN SHAIKH : We’re joined by Alejandro Velasco, associate professor at New York University, where he’s a historian of modern Latin America. Velasco is a former executive director of NACLA Report on the Americas and the author of Barrio Rising: Urban Popular Politics and the Making of Modern Venezuela . He was born and raised in Venezuela.

Welcome back to the show, Alejandro. So, if you could comment on these latest developments, the U.S. coming the closest it has to invading Venezuela’s airspace, and then also Machado receiving the Nobel Peace Prize?

ALEJANDRO VELASCO : Yeah, no, for sure. But on the one hand, you have to think of it as an escalation on two fronts. The most apparent front, of course, is the military escalation. Even though they’re calling it a legal maneuver, more of a high seas stakes kind of operation, in fact, the United States has amassed the greatest number of troops in the 21st century in the Caribbean. And so, this absolutely escalates this kind of war of chicken with the Venezuelan government.

But it’s also, in some ways, an escalation of illegality. The United States has in the past seized Venezuelan assets. It now controls millions of Venezuelan Citgo assets, that it has essentially, you know, kidnapped for ransom in the United States. The U.K. has also held Venezuelan gold in the wake of 2019’s crisis that saw Juan Guaidó become sort of interim president, self-proclaimed. And so, that kind of escalation is very significant and worrisome.

It’s been interesting that the Venezuelan government’s reaction has been really disciplined. They’ve not fallen into the trap of trying to be goaded into some kind of, you know, response that would surely bring greater military action on the part of the United States.

Now, on the other hand, of course, you have the María Corina Machado ceremony. She had said that she would not leave Venezuela until the final battle is won. And so, now the question is: Will she be able to return, or will she run the fate of many Venezuelan politicians in the opposition who have lived out their promise in exile?

NERMEEN SHAIKH : But was she under a travel ban? She had apparently not seen her own children. Her daughter received the prize in her stead. She had not seen the kids for a year or possibly two. What kind of ban was she under?

ALEJANDRO VELASCO : So, the Venezuelan government had an arrest warrant on her, and so they had alleged that she had violated campaign laws, political actions, as well. And so, certainly, she was under hiding. And, of course, her concern was, if she was going to be detained, then she might suffer the consequences of torture or other kinds of violations. But, of course, her profile is so significant. It’s so high-profile that it’s also somewhat far-fetched to imagine that the Venezuelan government would in fact detain her, rather than in fact see what she’s doing now, which is to leave.

AMY GOODMAN : Let me play María Corina Machado speaking at the Norwegian parliament today.

MARÍA CORINA MACHADO : I am very very hopeful Venezuela will be free, and we will turn our country into a beacon of hope and opportunity of democracy. And we will welcome not only the Venezuelans that have been forced to flee, but citizens from all over the world that will find a refuge, as Venezuela used to be decades ago.

AMY GOODMAN : On Tuesday night, hundreds of protesters marched in Oslo to condemn the selection of Machado for the Nobel Peace Prize because she has supported Trump’s threats against Venezuela. In October, she dedicated the peace prize to Trump. This is Lina Álvarez of the Norwegian Solidarity Committee for Latin America in Oslo.

LINA ÁLVAREZ REYES : [translated] We are here in a demonstration organized together with a broad alliance of Norwegian solidarity and peace organizations, where we are highlighting that the Nobel Prize is being used to legitimize military intervention. This year’s Nobel Prize winner has not distanced herself from the interventions and the attacks we are seeing in the Caribbean, and we are stating that this clearly breaks with Alfred Nobel’s will.

AMY GOODMAN : So, Professor Velasco, if you can comment on this? And then talk about this tanker. President Trump boasted it’s the largest tanker ever seized.

ALEJANDRO VELASCO : Yeah, it’s hard to parse the Nobel Prize Committee’s selection and then how they have proceeded over the last few weeks since the announcement. The history of the Nobel Prize being awarded to politicians and opposition politicians is, let’s just say, not a very storied one. More recently, of course, you have Barack Obama, who had been awarded the prize, and, before that, Aung San Suu Kyi, the Burmese politician. And at the time, they said that these were more aspirational awards for, hopefully, what would come, if they in fact reached the kind of power that they wanted. But in both of those cases, of course, we did not see peace. We saw, in the case, unfortunately, of Obama, war, and of Aung San Suu Kyi, an authoritarian turn.

So, part of the question here is: Why would the Nobel Prize Committee take another chance on an opposition politician who has been so vocal in requesting and demanding an armed intervention of her country, even, of course, as, on the other hand, she would say, like, “Well, we want peace, and we want democracy”? This is what the Norwegians — you know, the prime minister conversation was like. So, that’s on the one hand: What is going on with the Norwegians, who have in the past tried to broker some kind of negotiation with the Venezuelan government?

But in terms, of course, of the oil question, it’s a kind of a two-handed approach on — you have this seeming carrot of, you know, will we seek peace, and bringing the Norwegians along, but, on the other hand, we’re seizing ships, we are launching military aircraft, fighter jets, right literally off the coast of Venezuela, all in an effort to try to goad the Venezuelan government to a kind of misstep that would then justify some military intervention.

NERMEEN SHAIKH : But what do we know about this ship? U.S. officials, of course, say the ship had been previously linked to the smuggling of Iranian oil. The final destination of the ship was indeed Asia. Can you talk about the claim that the Venezuelan state-owned oil company PDVSA is part of a global black market network?

ALEJANDRO VELASCO : So, Venezuela’s oil, and PDVSA in particular, has been sanctioned since the first Trump administration. And those, as we call, sectoral sanctions are part of a maximum-pressure campaign on the part of the U.S. to try to force the Venezuelan government out of power. They, of course, you know, withheld — withstood that pressure. But it does mean that Venezuelan oil and Venezuelan oil interests and assets abroad, as well as domestically, are under threat.

The paradox here is that Venezuela continues to sell oil to the United States. So, on the one hand, we have the seizure of a tanker, and then other tankers that are just finding their way to the United States by way of licenses to be sold in the U.S. market. And so, part of this is this larger narrative of Venezuelan narcoterrorism and this access between Iran and Venezuela and Cuba. But, on the other hand, you have the continuation of politics as normal. So, it’s hard, extremely difficult to parse what the actual intentions are.

AMY GOODMAN : So, let’s talk about Venezuela having the world’s largest oil reserves. It’s under threat from the United States, and followed by Colombia. On Wednesday, President Trump threatened Colombian President Gustavo Petro.

PRESIDENT DONALD TRUMP : Colombia is producing a lot of drugs, a lot of — they have cocaine factories that they make cocaine, as you know, and they sell it right into the United States. So, he’d better wise up, or he’ll be next. He’ll be next, too. And I hope he’s listening. He’s going to be next.

AMY GOODMAN : “I hope he’s listening,” he says about Petro. “He’s going to be next.” He also brought in narcotrafficking. It’s important to note that in this past week, he pardoned a major narcotrafficker — right? — the former Honduran President Juan Orlando Hernández, who was sentenced in a U.S. court to 45 years in prison, served about a year of that, accused of bringing in, using all the levers of the Honduran state, military, police, helping to facilitate 400 tons of cocaine into the United States. So, talk about Colombia.

ALEJANDRO VELASCO : I mean, Colombia has been nothing if not a stable partner for the United States in terms of drug interdiction. It is, continues to be a major oil — drug producer and exporter, although Ecuador has now become far more. And, of course, Ecuador has a friendly ally to the Trump administration as president currently, and so they’re not talking about Ecuador.

But, you know, what this demonstrates is that, certainly on the part of Trump, but also on the part of Marco Rubio, Pete Hegseth and others in the U.S. government, Venezuela is not the only target. It’s other Latin American governments. And questions about narcoterrorism are really subterfuge claims, in fact, to get rid of leftist governments in the region. Right? This is what we see with Colombia. We’ve seen all the threats to Mexico, as well, which has a leftist government, as well. So, this has a lot to do with ideology, despite the claims that it’s impact about drugs.

AMY GOODMAN : Before you go, I wanted to ask you about what is a national story but also has international significance, this Miami mayoral race, with voters electing the city’s first woman mayor and its first Democratic mayor in 30 years. In a stunning upset, the former County Commissioner Eileen Higgins received about 59% of the vote, defeating the Cuban-born Republican, Emilio González, who had been endorsed by Trump. How does this relate to what we’re seeing now? Trump reportedly has been extremely affected by these two races this week: one was her upset victory; the other, a smaller race in Georgia, in a Trump region, where a Democratic state legislator won this time around. But what about Miami?

ALEJANDRO VELASCO : It’s massive, and especially in the worldview of Trump and his ties to Florida, in particular, but his sense that Florida was now in the bag for Trump. And this tremendous upset — and it wasn’t even close, right? And we’re talking massive margin — suggests that perhaps the message, the bellicose message, that the Cuban American community, most of the Cuban American community in South Florida —

AMY GOODMAN : And if you can bring up Rubio in this, the secretary of state, and his role in what’s happening, a Cuban American from Miami?

ALEJANDRO VELASCO : Of course. I mean, yes, this sort of —

AMY GOODMAN : From Florida.

ALEJANDRO VELASCO : — staid, Cold War-era discourse being brought up again. What it suggests is that it’s perhaps run a bit of its course, and the interests of people in Miami, as elsewhere in the country, especially Republican voters, are much more fixated on: What are you doing for me here at home? Why are we worrying about interventions abroad? What are you doing for us here at home? And this is a tremendous warning, I think, to the Trump administration, Trump in particular, that the shift — the focus has to shift to domestic interests, especially around the economy, rather than these war games with tremendously high stakes in the Caribbean.

AMY GOODMAN : Well, we want to thank you, Alejandro Velasco, for joining us, associate professor at New York University, historian of modern Latin America, former executive editor of NACLA Report on the Americas, author of Barrio Rising: Urban Popular Politics and the Making of Modern Venezuela.

Coming up, acclaimed academic and writer Mahmood Mamdani. He’s author of the new book, Slow Poison. He’s also the father of the New York Mayor-elect Zohran Mamdani. Stay with us.

[break]

AMY GOODMAN : “Peaceable Kingdom” by Patti Smith, performed at Democracy Now!’s 20th anniversary, as we move into our 30th anniversary this February.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.

Headlines for December 11, 2025

Democracy Now!
www.democracynow.org
2025-12-11 13:00:00
Caracas Condemns U.S. Seizure of Oil Tanker Off Venezuela’s Coast as “International Piracy”, “He’d Better Wise Up, or He’ll Be Next”: Trump Threatens Colombian President Gustavo Petro, Child Freezes to Death as Torrential Rains Flood Tents of Gaza’s Di...
Original Article


Headlines December 11, 2025


Caracas Condemns U.S. Seizure of Oil Tanker Off Venezuela’s Coast as “International Piracy”

Dec 11, 2025

U.S. forces seized a tanker loaded with crude oil off the coast of Venezuela Wednesday, as the Pentagon ramps up its military buildup in the Caribbean ahead of possible strikes on Venezuela. Attorney General Pam Bondi announced the seizure of the 20-year-old tanker named The Skipper in a social media post, accompanied by video showing soldiers rappelling from helicopters and pointing weapons at sailors. Bondi said Coast Guard, FBI and Homeland Security officers carried out a seizure warrant for the tanker, which she said was used to transport sanctioned oil from Venezuela and Iran. At the White House, President Trump confirmed the raid.

President Donald Trump : “As you probably know, we’ve just seized a tanker on the coast of Venezuela, large tanker, very large, largest one ever seized, actually.”

Reporter : “We’re interested in the seizure of this tanker. What happens to the oil on this ship?”
President Donald Trump : “Well, we keep it, I guess. I don’t know.”

Venezuela’s government condemned the seizure as an “act of international piracy.” It comes after the Pentagon carried out more than 20 strikes on alleged drug boats that human rights groups have condemned as murder.

“He’d Better Wise Up, or He’ll Be Next”: Trump Threatens Colombian President Gustavo Petro

Dec 11, 2025

President Trump signaled Wednesday he may expand his attacks on alleged narcotraffickers to Colombia, following up his threats against Venezuelan President Nicolás Maduro with a new threat against Colombian President Gustavo Petro.

President Donald Trump : “Colombia is producing a lot of drugs, a lot of — they have cocaine factories that they make cocaine, as you know, and they sell it right into the United States. So, he’d better wise up, or he’ll be next. He’ll be next, too.”

We’ll have more on this story after headlines.

Child Freezes to Death as Torrential Rains Flood Tents of Gaza’s Displaced Palestinians

Dec 11, 2025

In Gaza, hundreds of thousands of displaced Palestinians are struggling to stay warm and dry as a fierce winter storm brings heavy rains and flash flooding to the territory. Forecasters are predicting two months’ worth of rain to fall on Gaza over just two days, threatening to flood makeshift tents housing families. Already the storm has claimed at least one life. Eight-month-old Rahaf Abu Jazar died of cold exposure earlier today after water flooded her family’s tent in Khan Younis.

Meanwhile, Israel’s military continues to violate the U.S.-brokered ceasefire deal it agreed to in October. Health officials report four bodies and 10 injured Palestinians were brought to hospitals over the last 24 hours. That brings the number of Palestinians killed since the truce was declared to 383. This is Umm al-Abd al-Jarjawi, the aunt of a Palestinian killed in an Israeli airstrike on Monday.

Umm al-Abd al-Jarjawi : “They were sitting in their home thinking they were safe, because there is a ceasefire and nothing is happening. He was bombed while he was at home. His mother was few steps away from him. God saved her. … There is no safety here. There is no consideration for the ceasefire. The war is still going on. People are bombed every day in their homes.”

Five Palestine Action Supporters Hospitalized While on Hunger Strike to Protest Imprisonment

Dec 11, 2025

In Britain, five political prisoners awaiting trial for supporting the banned protest group Palestine Action have been hospitalized due to deteriorating health as a result of hunger strikes. It’s now the largest coordinated hunger strike in U.K. prisons since the 1981 Irish Republican protests led by political prisoner Bobby Sands.

House of Representatives Passes $901 Billion National Defense Authorization Act

Dec 11, 2025

On Capitol Hill, the House of Representatives has passed the $901 billion National Defense Authorization Act. Combined with a supplemental bill passed earlier this year, the NDAA would expand the U.S. military’s budget to over $1 trillion. The bill drew bipartisan support, passing on a vote of 312 to 112, with 94 Democrats and 18 Republicans in opposition. Minnesota Democratic Congresswoman Ilhan Omar voted no. She said it was because “Congress cannot continue writing blank checks for endless war while millions of Americans struggle to afford housing, healthcare, and basic necessities.”

NTSB Chair Says New Military Bill Would Make Washington, D.C., Airspace Less Safe

Dec 11, 2025

The chair of the National Transportation Safety Board warned Wednesday that a section in the National Defense Authorization Act would weaken safety measures near Ronald Reagan Washington National Airport. NTSB Chair Jennifer Homendy specifically cited the board’s investigation into the January 29 collision between an Army helicopter and a commercial jet that killed 67 people. The investigation found that the military helicopter was not using enhanced tracking technology. The recently passed defense authorization bill creates a waiver for military aircraft to turn off their enhanced tracking software while flying on national security missions through parts of the Washington, D.C., airspace.

Jennifer Homendy : “This is a significant, significant safety setback. It represents an unacceptable risk to the flying public, to commercial and military aircraft crews and to the residents in the region. It’s also an unthinkable dismissal of our investigation and of 67 families, 67 families who lost loved ones in a tragedy that was entirely preventable. This is shameful.”

Federal Reserve Votes to Cut Interest Rates by Quarter Point

Dec 11, 2025

In economic news, the Federal Reserve voted Wednesday to cut interest rates by a quarter point for the third time this year. But the vote to reduce rates was split 9 to 3; usually the Fed votes unanimously when making major changes to the interest rate. This comes as the U.S. economy is reeling from tariffs, immigration crackdowns and cuts to government spending. And despite inflation and unemployment ticking up in September, not to mention four months of job losses over the past six months, President Trump offered an optimistic assessment of the U.S. economy.

Dasha Burns : “But I do want to talk about the economy, sir, here at home. And I wonder what grade you would give your economy.”

President Donald Trump : “A-plus.”

Dasha Burns : “A-plus?”

President Donald Trump : “Yeah, A-plus-plus-plus-plus-plus.”

President Trump was interviewed by Politico’s Dasha Burns.

200+ Environmental Groups Demand Halt to Construction of New U.S. Data Centers

Dec 11, 2025

More than 200 environmental groups are demanding a national moratorium on the construction of data centers in the U.S. until new regulations are put in place. In an open letter addressed to Congress, the groups, which include Greenpeace, Friends of the Earth and Food & Water Watch, write, “The rapid, largely unregulated rise of datacenters to fuel the AI and crypto frenzy is disrupting communities across the country and threatening Americans’ economic, environmental, climate and water security.” Vermont’s independent Senator Bernie Sanders urged opponents of AI data centers to keep up the pressure on elected officials.

Sen. Bernie Sanders : “In community after community, Americans are fighting back against data centers being built by some of the largest and most powerful corporations in the world. They are opposing the destruction of their local environment, soaring electric bills and the diversion of scarce water supplies. Nationally, how will continued construction of AI data centers impact our environment?”

Report: Wealthiest 0.001% Hold Three Times More Wealth Than the Poorest Half of Humanity

Dec 11, 2025

A new report on global inequality shows the wealthiest 0.001% hold three times more wealth than the poorest half of humanity. Publishers of the World Inequality Report say their findings show the global wealth gap is much larger than most people imagine, with fewer than 60,000 multimillionaires and billionaires holding unprecedented financial power, while billions of the world’s poor remain cut off from even basic economic stability.

Federal Judge Orders Trump Administration to End Deployment of Troops to Los Angeles

Dec 11, 2025

A federal judge in California has ordered the Trump administration to end its deployment of National Guard forces to Los Angeles and to return control of the troops to Governor Gavin Newsom. District Judge Charles Breyer issued the preliminary injunction on Wednesday, after rejecting government claims that protests against Trump’s immigration crackdown in L.A. amounted to a “rebellion.” But Judge Breyer put the decision on hold until next Monday to give the Trump administration time to appeal.

Trump Launches “Trump Gold Card” Visa Program for Wealthy Noncitizens

Dec 11, 2025

President Trump has officially launched a visa program that provides a pathway for wealthy noncitizens to get expedited permission to live and work in the United States. For a $1 million payment, visitors can obtain a “Trump Gold Card” that promises to expedite U.S. residency applications “in record time.” The administration says it’ll soon offer a $5 million “Trump Platinum Card” allowing visitors to avoid paying some U.S. taxes.

Separately, new U.S. Customs and Border Protection rules published this week would require visitors from 42 countries on the visa waiver program to provide up to five years of their social media history, along with telephone numbers, email addresses and biometric data including DNA, face, fingerprint and iris scans.

Trump Administration Abruptly Cancels Naturalization Ceremonies for Immigrants from 19 Nations

Dec 11, 2025

The Trump administration recently told green card holders on the cusp of becoming U.S. citizens that their naturalization ceremonies had been canceled. Among those affected were immigrants who lined up at Boston’s Faneuil Hall last week and prepared to pledge allegiance to the United States. The group Project Citizenship told radio station WGBH, “officers were asking everyone what country they were from, and if they said a certain country, they were told to step out of line and that their oath ceremonies were canceled.”

At Least 33 People Killed After Myanmar Military Bombs Hospital

Dec 11, 2025

Image Credit: AP photo

In Burma, at least 33 people were killed after forces loyal to the country’s military leaders bombed a hospital in Rakhine state. The airstrike left dozens of people injured, including 27 in critical condition. The attack came on International Human Rights Day and ahead of elections set for the end of December. Burmese opposition groups are boycotting the election, after major political parties were barred from running by the ruling military junta.

Bolivia’s Former President Luis Arce Arrested over Alleged Graft

Dec 11, 2025

Bolivia’s former President Luis Arce has been arrested in the capital La Paz as part of the government’s investigation into alleged graft. Arce is accused of authorizing transfers from the public treasury to the personal accounts of political leaders when he served as economy minister under former President Evo Morales.

