CiviCRM Scotland Meetup, Thursday 29 January 2026

CiviCRM
civicrm.org
2025-12-09 11:10:13
In this Meetup, we explore CiviBooking - an extension for organisations that hire out rooms or resources that they want to track through CiviCRM. We're delighted that Mathieu Lu, Co-founder at Coop SymbioTIC (Montreal) and a maintainer of the CiviBooking extension, will join us to discuss what CiviB...
Original Article

In this Meetup, we explore CiviBooking - an extension for organisations that hire out rooms or resources that they want to track through CiviCRM. We're delighted that Mathieu Lu, Co-founder at Coop SymbioTIC (Montreal) and a maintainer of the CiviBooking extension, will join us to discuss what CiviBooking can do.

For existing CiviCRM users, there will be opportunities to meet and discuss CiviCRM with other organisations using the software in their day-to-day work, and to ask questions of experts.

You are invited to join us in person or online. The event is free, conveniently situated at The Melting Pot, next to Edinburgh Waverley train station - and there will be tea and biscuits!

Hosted by CiviCRM Bronze Partner, Pooka & Co.

Stop Breaking TLS

Lobsters
www.markround.com
2025-12-10 07:04:32
Comments...
Original Article

Rant ahead: I hate TLS “Inspection” software with a burning passion, and I wish we as an industry would collectively just knock it the fuck off and stop pretending it’s some great security benefit. Every time I encounter it, in whatever form, it’s a gigantic headache that makes everyone’s life worse and, as far as I am concerned, offers next to zero tangible benefits.

For those blissfully unaware, this is a class of “security” software or appliance that is supposed to let organisations monitor all encrypted traffic. It does this by inserting itself in the middle of traffic, stripping the encryption off so it can inspect it and then re-signing it with its own certificate. If that sounds familiar, it’s because it’s a widely known class of attack - the Man In The Middle attack. Great stuff, we’re literally deploying the exact attack vector that TLS was designed to prevent, but slapping a “security” label on it.
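
To make the mechanics concrete, here is a small Python sketch (the hostname is only an example) that prints the issuer of the certificate a client is actually shown. On a clean network this is a public CA; behind a TLS interception box it is whatever certificate the middlebox re-signs traffic with, assuming its CA has been pushed into your trust store (otherwise the connection simply fails verification).

import socket
import ssl

def peer_cert_issuer(host: str, port: int = 443) -> str:
    """Return the issuer of the leaf certificate presented for host:port."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # getpeercert() returns the issuer as a tuple of RDN (name, value) pairs
    issuer = dict(rdn[0] for rdn in cert["issuer"])
    return issuer.get("organizationName") or issuer.get("commonName", "<unknown>")

if __name__ == "__main__":
    print(peer_cert_issuer("example.com"))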

Firstly, it undermines one of the most important protocols of the modern Internet as it deliberately breaks all the guarantees that TLS encryption is supposed to offer. If the MITM certificate is installed everywhere, your company can intercept and monitor everything you say and do. Consider the ramifications of that - confidential messages to HR, medical information, insider trading information, your banking sessions - would you feel happy BCC’ing every single email to your IT department? Would you print out your therapy notes and pin them to the kitchen notice board?

But even ignoring the philosophical arguments about privacy and trust, I argue it actively makes your security worse. Consider this - what is the likelihood of every certificate authority on the Internet having their private keys compromised simultaneously? I’d wager it’s down at whatever the statistical equivalent of the Planck length is.

On the other hand, what’s the chance of your company’s MITM private key getting compromised by an attacker? Even if you completely trust your IT team and vendor (and if you do, you clearly haven’t been paying attention to any tech news for oh… the last few decades), you have to admit that chance is a lot higher. And depending on the vendor or tech stack involved, it could be a LOT higher. One disgruntled employee, one unpatched vulnerability, one phishing email to the right admin and choo-choo, it’s all aboard the FAIL train. Now an attacker could have the keys to your entire kingdom.

Then there’s the practicalities of it. It’s simply a massive hassle. Different Operating Systems expect certificates in different formats (PEM? DER? PFX? P7B?) installed in different places with different tooling to manage it all. update-ca-certificates vs update-ca-trust is just the tip of the iceberg - and that’s just the OS level. You then have language runtimes (Java keystore anyone?) and the applications themselves that all need to be configured.
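
To see that fragmentation on a single machine, a couple of lines of Python are enough: the OpenSSL build, the OS trust store and the certifi bundle that pip-installed tools rely on (assuming certifi is present) do not even agree on where the trusted CAs live.

import ssl

# Paths this Python/OpenSSL build expects for CA bundles and directories
print(ssl.get_default_verify_paths())

try:
    import certifi  # bundle used by requests and friends, separate from the OS store
    print("certifi bundle:", certifi.where())
except ImportError:
    print("certifi not installed")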

And the problem is compounded with modern cloud-native apps. In a Kubernetes cluster, as well as having to handle updating the node VM images and container runtimes, you’ll have dozens if not hundreds of different base images, each of which has its own standards. Alpine uses a different certificate path than Ubuntu. Your Node app expects them somewhere else entirely. The various CRDs or Helm charts you are using may or may not support custom CA bundles, and if they do, there’s no agreed-on standard.

Now I’m not saying that we should simply give up because a problem is hard, but even if the benefits were worth it, the simple fact is that even with the best tooling and automation you are guaranteed to miss something. Whether it’s some obscure tool that has a custom keystore and non-standard tooling, a quick “one off” command in an ephemeral container, some app that uses certificate pinning or aging switch firmware that doesn’t even support custom certificate bundles, something will slip through the cracks. And when it does, guess what happens?

Which brings me to my biggest peeve: it normalizes bad security practices. Given that you will never have 100% coverage of your CA certificate installation - particularly amongst your technical teams who will be using a multitude of different tools and platforms - you get developers and sysadmins used to TLS errors. Instead of treating each one as an anomaly and something to be investigated, you get used to just running with --insecure or curl -k because you just need to get shit done. Turning off certificate verification becomes a routine troubleshooting step. “Oh, it’s probably just the corporate proxy again” becomes the reflexive response to any TLS error. You’ve just trained your entire technical staff to ignore one of the most important security warnings on the Internet!

And don’t even get me started on the performance and availability implications. All your traffic now has to be decrypted and re-encrypted by your magic box. Hope you sized that appliance correctly! Hope it doesn’t become a single point of failure! Hope it supports all the latest TLS versions and cipher suites!

There are a multitude of ways to protect yourself that are not only less invasive but are often more effective because they’re designed for how modern infrastructure actually works. Anomaly detection, Zero Trust network architecture, EDR, Netflow analysis… You don’t need to create single points of failure, and you can actually work with modern cloud-native infrastructure instead of fighting it. Plus, y’know, there’s this AI thing which as it turns out is actually quite useful at analysing metadata and spotting odd behavioral patterns.

In my experience: TLS Inspection MITM is a gigantic administrative burden, it normalizes bad practice, it creates bottlenecks and availability risks, and actively worsens your security posture.

Just stop it already.

Revisiting "Let's Build a Compiler"

Hacker News
eli.thegreenplace.net
2025-12-10 06:22:19
Comments...
Original Article

There's an old compiler-building tutorial that has become part of the field's lore: the Let's Build a Compiler series by Jack Crenshaw (published between 1988 and 1995).

I ran into it in 2003 and was very impressed, but it's now 2025 and this tutorial is still being mentioned quite often in Hacker News threads. Why is that? Why does a tutorial from 35 years ago, built in Pascal and emitting Motorola 68000 assembly - technologies that are virtually unknown to the new generation of programmers - hold sway over compiler enthusiasts? I've decided to find out.

The tutorial is easily available and readable online, but just re-reading it seemed insufficient. So I decided to meticulously translate the compilers built in it to Python and emit a more modern target - WebAssembly. It was an enjoyable process and I want to share the outcome and some insights gained along the way.

The result is this code repository. Of particular interest is the TUTORIAL.md file, which describes how each part in the original tutorial is mapped to my code. So if you want to read the original tutorial but play with code you can actually easily try on your own, feel free to follow my path.

A sample

To get a taste of the input language being compiled and the output my compiler generates, here's a sample program in the KISS language designed by Jack Crenshaw:

var X=0

 { sum from 0 to n-1 inclusive, and add to result }
 procedure addseq(n, ref result)
     var i, sum  { 0 initialized }
     while i < n
         sum = sum + i
         i = i + 1
     end
     result = result + sum
 end

 program testprog
 begin
     addseq(11, X)
 end
 .

It's from part 13 of the tutorial, so it showcases procedures along with control constructs like the while loop, and passing parameters both by value and by reference. Here's the WASM text generated by my compiler for part 13:

(module
  (memory 8)
  ;; Linear stack pointer. Used to pass parameters by ref.
  ;; Grows downwards (towards lower addresses).
  (global $__sp (mut i32) (i32.const 65536))

  (global $X (mut i32) (i32.const 0))

  (func $ADDSEQ (param $N i32) (param $RESULT i32)
    (local $I i32)
    (local $SUM i32)
    loop $loop1
      block $breakloop1
        local.get $I
        local.get $N
        i32.lt_s
        i32.eqz
        br_if $breakloop1
        local.get $SUM
        local.get $I
        i32.add
        local.set $SUM
        local.get $I
        i32.const 1
        i32.add
        local.set $I
        br $loop1
      end
    end
    local.get $RESULT
    local.get $RESULT
    i32.load
    local.get $SUM
    i32.add
    i32.store
  )

  (func $main (export "main") (result i32)
    i32.const 11
    global.get $__sp      ;; make space on stack
    i32.const 4
    i32.sub
    global.set $__sp
    global.get $__sp
    global.get $X
    i32.store
    global.get $__sp    ;; push address as parameter
    call $ADDSEQ
    ;; restore parameter X by ref
    global.get $__sp
    i32.load offset=0
    global.set $X
    ;; clean up stack for ref parameters
    global.get $__sp
    i32.const 4
    i32.add
    global.set $__sp
    global.get $X
  )
)

You'll notice that there is some trickiness in the emitted code w.r.t. handling the by-reference parameter (my previous post deals with this issue in more detail). In general, though, the emitted code is inefficient - there is close to 0 optimization applied.

Also, if you're very diligent you'll notice something odd about the global variable X - it seems to be implicitly returned by the generated main function. This is just a facility that makes my compiler easy to test. All the compilers are extensively tested - usually by running the generated WASM code [1] and verifying the expected results.
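
For the curious, the test harness is roughly shaped like this - a simplified sketch with an illustrative filename, not the repository's actual test code - using the wasmtime Python bindings to run the module shown above:

from wasmtime import Engine, Instance, Module, Store

# Load the WAT emitted by the compiler (the filename is made up for this example)
with open("part13_sample.wat") as f:
    wat_text = f.read()

engine = Engine()
store = Store(engine)
module = Module(engine, wat_text)  # wasmtime accepts WAT text as well as binary wasm
instance = Instance(store, module, [])
main = instance.exports(store)["main"]

# addseq(11, X) sums 0..10 (= 55) into X, which main then implicitly returns
assert main(store) == 55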

Insights - what makes this tutorial so special?

While reading the original tutorial again, I had an opportunity to reflect on what makes it so effective. Other than Jack Crenshaw's very fluent and conversational writing style, I think it's a combination of two key factors:

  1. The tutorial builds a recursive-descent parser step by step, rather than giving a long preface on automata and table-based parser generators. When I first encountered it (in 2003), it was taken for granted that if you want to write a parser then lex + yacc are the way to go [2]. Following the development of a simple and clean hand-written parser was a revelation that wholly changed my approach to the subject; subsequently, hand-written recursive-descent parsers have been my go-to approach for almost 20 years now (a minimal sketch in that spirit follows after this list).
  2. Rather than getting stuck in front-end minutiae, the tutorial goes straight to generating working assembly code, from very early on. This was also a breath of fresh air for engineers who grew up with more traditional courses where you spend 90% of the time on parsing, type checking and other semantic analysis and often run entirely out of steam by the time code generation is taught.
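
As an illustration of the first point, here is a toy hand-written recursive-descent expression evaluator in Python - my own sketch in the tutorial's spirit, not code from the repository - showing how little machinery such a parser actually needs:

import re

def tokenize(src):
    return re.findall(r"\d+|[()+\-*/]", src)

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self, expected=None):
        tok = self.peek()
        assert tok is not None and (expected is None or tok == expected)
        self.pos += 1
        return tok

    # expression ::= term (('+' | '-') term)*
    def expression(self):
        value = self.term()
        while self.peek() in ("+", "-"):
            op = self.eat()
            rhs = self.term()
            value = value + rhs if op == "+" else value - rhs
        return value

    # term ::= factor (('*' | '/') factor)*
    def term(self):
        value = self.factor()
        while self.peek() in ("*", "/"):
            op = self.eat()
            rhs = self.factor()
            value = value * rhs if op == "*" else value // rhs
        return value

    # factor ::= NUMBER | '(' expression ')'
    def factor(self):
        if self.peek() == "(":
            self.eat("(")
            value = self.expression()
            self.eat(")")
            return value
        return int(self.eat())

assert Parser(tokenize("2 + 3 * (4 - 1)")).expression() == 11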

To be honest, I don't think either of these is a big problem with modern resources, but back in the day the tutorial clearly struck a chord with many people.

What else does it teach us?

Jack Crenshaw's tutorial takes the syntax-directed translation approach, where code is emitted while parsing, without having to divide the compiler into explicit phases with IRs. As I said above, this is a fantastic approach for getting started, but in the later parts of the tutorial it starts showing its limitations. Especially once we get to types, it becomes painfully obvious that it would be very nice to know the types of expressions before we generate code for them.

I don't know if this played a part in Jack Crenshaw abandoning the tutorial at some point after part 14, but it may very well have. He keeps writing that the emitted code is clearly sub-optimal [3] and can be improved, but IMHO it's just not that easy to improve using the syntax-directed translation strategy. With perfect hindsight, I would probably use Part 14 (types) as a turning point - emitting some kind of AST from the parser and then doing simple type checking and analysis on that AST prior to generating code from it.
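
To make that concrete, here is a small sketch - entirely my own illustration, not code from the tutorial or my repository - of what such a turning point could look like: the parser produces a tiny AST, a type-checking pass annotates it, and code generation then knows up front where an int-to-float conversion belongs.

from dataclasses import dataclass

@dataclass
class Num:
    value: float
    type: str = None  # filled in by typecheck()

@dataclass
class BinOp:
    op: str
    lhs: object
    rhs: object
    type: str = None

def typecheck(node):
    # Annotate every node with "int" or "float" before any code is emitted.
    if isinstance(node, Num):
        node.type = "int" if float(node.value).is_integer() else "float"
    else:
        typecheck(node.lhs)
        typecheck(node.rhs)
        # Mixed-type arithmetic promotes to float - known *before* codegen runs.
        node.type = "float" if "float" in (node.lhs.type, node.rhs.type) else "int"
    return node.type

def codegen(node, out):
    # Emit WASM-flavoured instructions, inserting conversions where the types demand them.
    if isinstance(node, Num):
        out.append(f"f64.const {node.value}" if node.type == "float"
                   else f"i32.const {int(node.value)}")
        return
    for child in (node.lhs, node.rhs):
        codegen(child, out)
        if node.type == "float" and child.type == "int":
            out.append("f64.convert_i32_s")
    prefix = "f64" if node.type == "float" else "i32"
    out.append(f"{prefix}.add" if node.op == "+" else f"{prefix}.mul")

tree = BinOp("+", Num(1), BinOp("*", Num(2), Num(2.5)))
typecheck(tree)
code = []
codegen(tree, code)
print("\n".join(code))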

Conclusion

All in all, the original tutorial remains a wonderfully readable introduction to building compilers. This post and the GitHub repository it describes are a modest contribution that aims to improve the experience of folks who want to read the original tutorial today without relying on obsolete technologies. As always, let me know if you run into any issues or have questions!


[1] This is done using the Python bindings to wasmtime.
[2] By the way, gcc switched from YACC to hand-written recursive-descent parsing in the 2004-2006 timeframe, and Clang has been implemented with a recursive-descent parser from the start (2007).
[3]

Concretely: when we compile subexpr1 + subexpr2 and the two sides have different types, it would be mighty nice to know that before we actually generate the code for both sub-expressions. But the syntax-directed translation approach just doesn't work that way.

To be clear: it's easy to generate working code; it's just not easy to generate optimal code without some sort of type analysis that's done before code is actually generated.

How Google Maps quietly allocates survival across London’s restaurants - and how I built a dashboard to see through it

Lobsters
laurenleek.substack.com
2025-12-10 06:09:43
Comments...
Original Article

I needed a restaurant recommendation, so I did what every normal person would do: I scraped every single restaurant in Greater London and built a machine-learning model.

It started as a very reasonable problem. I was tired of doom-scrolling Google Maps, trying to disentangle genuinely good food from whatever the algorithm had decided to push at me that day. Somewhere along the way, the project stopped being about dinner and became about something slightly more unhinged: how digital platforms quietly redistribute economic survival across cities.

Because once you start looking at London’s restaurant scene through data, you stop seeing all those cute independents and hot new openings. You start seeing an algorithmic market - one where visibility compounds, demand snowballs, and who gets to survive is increasingly decided by code.

The public story of Google Maps is that it passively reflects “what people like.” More stars, more reviews, better food. But that framing obscures how the platform actually operates. Google Maps is not just indexing demand - it is actively organising it through a ranking system built on a small number of core signals that Google itself has publicly acknowledged: relevance, distance, and prominence.

“Relevance” is inferred from text matching between your search query and business metadata. “Distance” is purely spatial. But “prominence” is where the political economy begins. Google defines prominence using signals such as review volume, review velocity, average rating, brand recognition, and broader web visibility. In other words, it is not just what people think of a place - it is how often people interact with it, talk about it, and already recognise it.

Visibility on these ranked lists determines foot traffic. Foot traffic determines how quickly reviews accumulate. Review accumulation then feeds directly back into the prominence signal. The system compounds. Early discovery generates demand. Demand generates data. Data generates future discovery. This creates a cumulative-advantage dynamic that looks remarkably similar to the way capital compounds in financial markets. This is essentially Robert Merton’s Matthew Effect applied to kebab shops - ‘unto every one that hath shall be given.’

This disproportionately rewards chains and already-central venues. Chains benefit from cross-location brand recognition. High-footfall areas generate reviews faster, meaning venues in those zones climb the prominence ranking more quickly even at identical underlying quality. By contrast, new independents face a classic cold-start problem: without reviews they are hard to find, and without being found they struggle to accumulate reviews at all. What looks like neutral consumer choice is therefore better understood as algorithmically mediated market design .

In economics, this dynamic closely resembles the logic of a market maker: an intermediary that does not merely reflect underlying supply and demand, but actively shapes liquidity, matching, and price discovery. Platforms like Google Maps perform an analogous function for local services by controlling visibility rather than prices directly. In the language of digital economics, ranking algorithms act as attention allocators, steering demand toward some firms and away from others.

If Google Maps now acts as a kind of market maker for urban demand, the obvious next question is: what would the city look like without that amplification layer? In other words, how do you separate a restaurant’s intrinsic performance from the visibility effects of the platform itself?

To get at that, I built a machine-learning model - a gradient-boosted decision tree (for the ML crowd: HistGradientBoostingRegressor from scikit-learn) - to predict what a restaurant’s Google rating should be, given only its structural characteristics. This class of model is designed for large, messy, mixed-type tabular data and is particularly good at capturing interaction effects, without me having to specify those by hand. Features include how many reviews it has (log-transformed to reflect diminishing returns to attention), what cuisine it serves, whether it is part of a chain or an independent, its price level, broad venue types (restaurant, café, takeaway, bar), and where it sits in the city via a spatial grid.
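
A minimal sketch of that setup, with an invented miniature feature table standing in for the scraped data (the column names and values here are illustrative, not the real schema):

import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy rows standing in for the full London restaurant table
df = pd.DataFrame({
    "review_count": [1200, 15, 340, 4800],
    "cuisine": ["indian", "cafe", "turkish", "italian"],
    "is_chain": [0, 0, 0, 1],
    "price_level": [2, 1, 2, 3],
    "venue_type": ["restaurant", "cafe", "takeaway", "restaurant"],
    "grid_cell": ["N1", "N12", "E5", "WC2"],
    "rating": [4.3, 4.8, 4.1, 4.0],
})
df["log_reviews"] = np.log1p(df["review_count"])  # diminishing returns to attention

features = ["log_reviews", "is_chain", "price_level", "cuisine", "venue_type", "grid_cell"]
categorical = ["cuisine", "venue_type", "grid_cell"]
numeric = ["log_reviews", "is_chain", "price_level"]

preprocess = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore", sparse_output=False), categorical),
    ("num", SimpleImputer(strategy="median"), numeric),
])
model = Pipeline([("prep", preprocess), ("gbdt", HistGradientBoostingRegressor())])
model.fit(df[features], df["rating"])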

Quick aside: for a subset of places I also scraped review text, languages, and photos. But for this first full-city run I stayed within the Google Maps API free tier - partly for reproducibility, partly because previous grid-scraping adventures taught me that cloud bills compound faster than review counts. So, for future versions, more features will only improve things. In particular, who is doing the reviewing matters. A five-star review of an Indian restaurant written in Hindi probably carries a different signal than one written by someone ordering chips with everything. (No judgment of white British people ofc…)

One practical problem I ran into early on is that Google Maps is surprisingly bad at categorising cuisines. A huge share of restaurants are labelled vaguely (“restaurant”, “cafe”, “meal takeaway”), inconsistently, or just incorrectly. So I ended up building a separate cuisine-classification model that predicts cuisine from restaurant names, menu language, and review text where available. In other words, the cuisine filters in the dashboard are not just Google’s tags - they’re machine-learned. This matters more than it might sound: if you misclassify cuisines, you misread diversity, clustering, and who actually competes with whom on the high street. Btw, I briefly considered classifying Pret A Manger as French, just to see if it would make any French people angrier at me than they already are. I didn’t. But I thought about it.
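
A rough sketch of how such a classifier could be wired up (the training examples below are invented purely for illustration):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples: restaurant name plus any review text, with a known cuisine label
texts = [
    "Lahore Karahi lamb chops naan",
    "Trattoria da Enzo carbonara tiramisu",
    "Mangal Ocakbasi adana kofte",
    "Golden Dragon dim sum roast duck",
]
labels = ["indian", "italian", "turkish", "chinese"]

# Character n-grams cope reasonably well with short, noisy restaurant names
cuisine_clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
cuisine_clf.fit(texts, labels)
print(cuisine_clf.predict(["Efes Grill mixed shish"]))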

Before any modelling happens, all features go through a standard preprocessing pipeline - imputation, encoding, the usual. Crucially, the model is trained only to learn the mapping between observable platform-visible features and ratings. This allows me to generate a counterfactual expected rating for each restaurant - what the platform would typically assign under those structural conditions. The difference between a restaurant’s real rating and this predicted rating is what I call the rating residual. A positive residual means the restaurant performs materially better than its platform baseline would suggest. A negative residual means it underperforms relative to what the algorithm normally rewards. This is not a perfect measure of food quality - but it is a powerful measure of algorithmic mispricing: where social or culinary value diverges from what the platform structurally amplifies.
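
Continuing the rating-model sketch above, the residual is simply the gap between the observed rating and the model’s counterfactual prediction:

# Expected rating under each restaurant's structural conditions.
# (In a real run you would use out-of-fold predictions, e.g. via
# sklearn.model_selection.cross_val_predict, so a venue never "sees" its own rating.)
df["expected_rating"] = model.predict(df[features])

# Positive residual: outperforms its platform baseline (an "underrated gem" candidate).
# Negative residual: underperforms what the algorithm normally rewards.
df["residual"] = df["rating"] - df["expected_rating"]
print(df.sort_values("residual", ascending=False)[["cuisine", "rating", "expected_rating", "residual"]])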

One caveat: some restaurants pay for promoted pins or local-search ads. Because paid visibility isn’t publicly disclosed, I can’t estimate how many - which is itself a sign of how opaque platform influence has become. My residuals may partly reflect ad spend I can’t observe.

To summarise this, I built the London food dashboard. The dashboard currently allows users to search by name and filter by underrated gems (identified by my machine learning algorithm), cuisine, borough, price level, min rating, and review volume. It is still very much a version-one prototype - but it is already a working microscope into London’s algorithmic food economy.

If you want to explore it yourself, you can find it on my personal website at: laurenleek.eu/food-map.

Naturally, I immediately stress-tested it on my favourite part of London: Islington (maybe all this promo - also in my previous Substack on UK segregation - makes me qualify for a council tax rebate? - I’m looking at you Jeremy Corbyn…). I switched on my “underrated gems” filter - that’s the ML residual at work - set a minimum rating and review count, excluded the eye-wateringly expensive options, and let the bubbles guide me. Bigger, darker bubbles mean places my model thinks the algorithm is undervaluing.

And just like that, I had dinner plans. Do try it yourself.

Btw, this is very much still a beta version - which means bugs, blind spots, and lots of room to grow. If something looks odd, missing, or wrong, please leave feature ideas and suggestions in the comments here or on my website. Unlike the VS Code GitHub tracker and its 13.8k open issues, I really do read them.

But restaurants don’t fail alone - they fail in ecosystems. I also wanted to understand what happens when platform dynamics scale up from restaurants to entire neighbourhood food ecosystems. So I added a second modelling layer.

First, I aggregate restaurants into small spatial cells (the hexagons you see on the maps - because squares are for people who haven’t thought hard enough about edge effects) and compute summary features for each area: restaurant density, mean rating, mean residual, total reviews, chain share, cuisine entropy, and price level. I then standardise these and run principal component analysis (PCA) to compress them into a single continuous hub score that captures overall “restaurant ecosystem strength” in one dimension. Finally, I apply K-means clustering to the same feature space to classify areas into four structural types: elite, strong, everyday, and weak hubs.
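
In code, that second layer looks roughly like this (a sketch over invented per-hexagon aggregates, not the real cells):

import pandas as pd
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Invented per-hexagon aggregates standing in for the real spatial cells
hexes = pd.DataFrame({
    "density": [120, 15, 60, 8],
    "mean_rating": [4.4, 4.1, 4.3, 3.9],
    "mean_residual": [0.10, -0.05, 0.03, -0.12],
    "total_reviews": [250_000, 4_000, 60_000, 900],
    "chain_share": [0.35, 0.10, 0.25, 0.05],
    "cuisine_entropy": [2.8, 1.2, 2.1, 0.7],
    "price_level": [2.4, 1.8, 2.1, 1.5],
})

X = StandardScaler().fit_transform(hexes)

# One continuous "hub score" from the first principal component
hexes["hub_score"] = PCA(n_components=1).fit_transform(X)[:, 0]

# Four structural clusters; which one reads as "elite" vs "weak" is decided afterwards
hexes["hub_type"] = KMeans(n_clusters=4, n_init="auto", random_state=0).fit_predict(X)
print(hexes[["hub_score", "hub_type"]])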

At first glance, the patterns look comfortingly familiar. Central London dominates. Of course it does. But what matters is not just where the hubs are - it’s what kind of hubs they are. Using the full hub score rather than raw ratings alone, I identify the five most structurally powerful restaurant hubs in London. They are the places where density, algorithmic attention, independent survival, and consumer spending power all line up at once. They are labeled on the maps. I am deliberately refusing to rank them loudly in prose in order to avoid starting neighbourhood wars at scale (and to not disappoint Islington) - but the visual story is already extremely clear.

Overlaying this with the cuisine density panels reveals something even sharper. London’s culinary diversity is not evenly distributed across its platform economy. Migrant cuisines cluster strongly in parts of the city where algorithmic visibility is structurally weaker. Italian, Indian, Turkish, Chinese, Thai, British, Japanese, French, American, and fish-and-chips all trace distinct settlement histories, labour networks, retail formats, and relationships to capital and rent. Some cuisines form long, contiguous corridors. Others appear as punctuated clusters tied to specific high streets or income brackets.

Cuisine diversity, in other words, is not just about taste. It is about where families settled, which high streets remained affordable long enough for a second generation to open businesses, and which parts of the city experienced displacement before culinary ecosystems could mature. (If this part especially speaks to you, I go much deeper into it in Food for thought: local restaurant diversity meets migration.)

The Take-Away and Some Unwanted Policy Advice

This project started as a search problem and ended as something more. The most important result isn’t which neighbourhood tops the rankings - it’s the realisation that platforms now quietly structure survival in everyday urban markets. London’s restaurant scene is no longer organised by taste alone. It is organised by visibility that compounds, rent that rises when discovery arrives, and algorithms that allocate attention long before consumers ever show up. What looks like “choice” is increasingly the downstream effect of ranking systems.

For policy, that shifts the frame. If discovery now shapes small-business survival, then competition, fairness, and urban regeneration can no longer ignore platform ranking systems. Councils can rebuild streets and liberalise licensing all they like - but algorithmic invisibility can still leave places economically stranded. Platform transparency and auditability are no longer niche tech debates; they are quietly becoming tools of local economic policy. At minimum, ranking algorithms with this much economic consequence should be auditable. We audit financial markets. We should audit attention markets too.

For a navigation app, Google Maps has a remarkable amount of power.
Just saying.

I’m also working on other maps (including a map of the best cycling and running routes with excellent cafés along the way, because I have needs). More broadly, I’m investing more and more time into building higher-quality public data projects. If you have an idea you’d like to see built - pitch it to me. And if you enjoy this kind of work, you can always Buy Me A Coffee or subscribe to help fund the next round of over-engineered maps.


Are the Three Musketeers allergic to muskets? (2014)

Hacker News
www.ox.ac.uk
2025-12-10 06:08:50
Comments...
Original Article


The Three Musketeers has been updated for the small screen by the BBC.

The BBC's new drama series The Musketeers – adapted from Alexandre Dumas' novel Les Trois Mousquetaires – made its debut on Sunday evening. Ahead of the screening, Dr Simon Kemp, Oxford University Fellow and Tutor in French, tackled the curious question of why the musketeers appear to have an aversion to muskets...

"So here it comes. Peter Capaldi – Malcolm Tucker as was, Doctor Who as shortly will be – is twirling his moustache as Cardinal Richelieu in trailers for the much-heralded BBC adaptation of Alexandre Dumas' Les Trois Mousquetaires (1844). It's always good to see British TV take on French literary classics. Let's hope The Musketeers has a little more in common with its source material than the BBC's other recent effort, The Paradise , for which I'd be surprised if the producers were able to put up the subtitle 'based on the novel by Émile Zola' without blushing.

"At any rate, the Dumas adaptation looks exciting, with plenty of cape-swishing, sword-fighting, smouldering looks and death-defying leaps. Plus one element that is markedly more prevalent than in the book itself: gunfire. One of the odder things about Dumas' novel for the modern reader is its singular lack of muskets.

"In the mid-1620s, when the story is set, the Mousquetaires are the household guard of the French king, Louis XIII, an elite force trained for the battlefield as well as for the protection of the monarch and his family in peacetime. They are named for their specialist training in the use of the musket ( mousquet ), an early firearm originally developed in Spain at the end of the previous century under the name moschetto or 'sparrow-hawk'. Muskets were long-barrelled guns, quite unlike the pistols shown in the trailer, and fired by a 'matchlock' mechanism of holding a match or burning cord to a small hole leading to the powder chamber. By the 1620s they were not quite as cumbersome as the Spanish originals, which needed to have their barrels supported on a forked stick, but they were still pretty unwieldy devices.

"There are lots of weapons in the opening chapters of Les Trois Mousquetaires , where D'Artagnan travels to the barracks and challenges almost everyone he meets along the way to a duel (including all three of the musketeers). Lots of sword-fighting, but no muskets in sight. One of the musketeers has nicknamed his manservant mousequeton , or 'little musket', and that is as near as we get to a gun until page 429 of the Folio edition, when an actual mousqueton makes its first appearance. A mousqueton is not quite a musket, though, and in any case it's not one of the musketeers who is holding it.

"The siege of La Rochelle in the later part of the story seems a more propitious setting for firearms, and indeed, as soon as he arrives at the camp, D'Artagnan spies what appears to be a musket pointing at him from an ambush and flees, suffering only a hole to the hat. Examining the bullet-hole, he discovers 'la balle n'était pas une balle de mousquet, c'était une balle d'arquebuse ' ('the bullet was not from a musket, it was an arquebuse bullet', arquebuse being an earlier type of firearm). We are now 586 pages into the story, and starting to wonder if Dumas is playing a game with us.

"The suspicion is heightened when the musketeers take a jaunt into no man's land for some secret scheming away from the camp: ' Il me semble que pour une pareille expedition, nous aurions dû au moins emporter nos mousquets ,' frets Porthos on page 639 ('It seems to me that we ought to at least have taken our muskets along on an expedition like this'). ' Vous êtes un niais, ami Porthos; pourquoi nous charger d'un fardeau inutile? ' scoffs Athos in return ('You're a fool, Porthos, my friend. Why would we weight ourselves down with useless burdens?').

"The key to the mystery of the missing muskets is in these lines. Their absence from the novel up to this point is simply for the historical reason that the heavy and dangerous weapons were appropriate for the battlefield, not for the duties and skirmishes of peace-time Paris. Even when his heroes are mobilized, Dumas remains reluctant to give his musketeers their muskets. Remember that, writing in the 1840s, Dumas is closer in time to us today than he is to the period he's writing about, and his gaze back to the 17th century is often more drawn to romance than historical accuracy (as the cheerfully pedantic footnotes in my edition point out on every other page).

"For Dumas, the charm of his chosen period lies in the skill and daring of the accomplished swordsman, and his breathless narrative can wring far more excitement from a well-matched duel of blades than it could from a military gun battle. Heroism in Dumas is to be found in noble combat, staring your opponent in the eye as you match his deadly blade with your own, not in the clumsy long-range slaughter of unknowns. Musketeers his heroes must be, in order that they might belong to the royal guard and thus play a role in the dark conspiracies hatched around the King, the Queen and her English lover by Cardinal Richelieu, the power behind the throne. But the muskets themselves are surplus to requirements.

"Dumas does relent a little on his musket-phobia by the end of the novel. On page 645, the musketless musketeers fire at their enemies using weapons grabbed from corpses. And finally, on page 705, when Richelieu catches the four friends conspiring on the beach, we are at last granted a glimpse of the soldiers' own guns: ' [Athos] montra du doigt au cardinal les quatre mousquets en faisceau près du tambour sur lequel étaient les cartes et les dès ' ('He pointed out to the cardinal the four muskets stacked next to the drum on which lay the cards and dice').

"As far as I can make out, this is the only point at which we see the musketeers with their muskets in the whole story, and it seems a fitting way to present them to the reader: lying idle while the musketeers are occupied with other, more important amusements."

This post originally appeared on the outreach blog of the French sub-faculty at Oxford University.

Do Not Optimize Away

Lobsters
matklad.github.io
2025-12-10 05:07:23
Comments...
Original Article

Compilers are sneaky beasts. If you time code like this:

var total: u32 = 0;
for (0..N) |i| total += i;
print("total={}", .{total});

You will discover that LLVM is as smart as a little kid named Gauss, and replaces the summation with the equivalent closed-form expression N(N+1)/2.

What’s more, if you write something more complicated like total += i + 2*i*i - i*i*i , you’ll see that LLVM figures out a closed-form expression for that as well (a generalization of the Gauss trick I proudly figured out in 11th grade). See for yourself: https://godbolt.org/z/T9EcTb8zq

Usually, this kind of thing is desirable — code runs faster! Except when you are trying to benchmark your code, and instead end up benchmarking an elaborate no-op.

There are two pitfalls with benchmarking. First , in

const start = now();
_ = computation();
const elapsed = now() - start;

a reasonable compiler can notice that computation’s result is not used, and optimize the entire computation away.

Second , in

const parameter_a = 1_000_000;
const parameter_b = 1_000;

const start = now();
_ = computation(parameter_a, parameter_b);
const elapsed = now() - start;

even if the computation is not elided as a whole, the compiler can constant-fold parts of it, taking advantage of the fact that the values of the parameters are known at compile time.

Time To Be Killing The Dragon Again

Usually languages provide some sort of an explicit “please do not optimize this away” function, like Rust’s hint::black_box or Zig’s mem.doNotOptimizeAway, but they always felt like snake oil to me:

  • Their semantics is tricky. It is sort of impossible to explain what exactly can and can not be optimized: the whole compilation pipeline is based on erasing everything about the original form of the code, maintaining only the semantics.
  • There’s a simpler and more direct way to achieve the desired result. Just open the box and check if the cat is there!

It’s easier to explain via an example. Let’s say I am benchmarking binary search:

fn insertion_point(xs: []const u32, x: u32) usize { ... }

I would use the following benchmarking scaffold:

fn benchmark(arena: Allocator) !void {
    const element_count =
        try parameter("element_count", 1_000_000);
    const search_count =
        try parameter("search_count", 10_000);

    const elements: []const u32 =
        make_elements(arena, element_count);
    const searches: []const u32 =
        make_searches(arena, search_count);

    const start = now();
    var hash: u32 = 0;
    for (searches) |key| {
        hash +%= insertion_point(elements, key);
    }
    const elapsed = now().duration_since(start);

    print("hash={}\n", .{hash});
    print("elapsed={}\n", .{elapsed});
}

fn parameter(comptime name: []const u8, default: u64) !u64 {
    const value = if (process.hasEnvVarConstant(name))
        try process.parseEnvVarInt(name, u64, 10)
    else
        default;
    print(name ++ "={}\n", .{value});
    return value;
}

On the input side, the parameter function takes a symbolic name and a default value. It looks up the value among the environment variables, falling back to the default. Because the value can be specified at runtime, the compiler can’t optimize assuming a particular constant. And you also get a convenient way to re-run the benchmark with a different set of parameters without recompiling.

On the output side, we compute an (extremely weak) “hash” of the results. For our binary search — just the sum of all the indexes. Then we print this hash together with the timing information. Because we use the results of our computation, the compiler can’t optimize them away!

Similarly to the parameter function, we also get a bonus feature for free. You know who also loves making code faster by deleting “unnecessary” functionality? I do! Though I am not as smart as a compiler, and usually end up deleting code that actually is required to get the right answer. With the hash, if I mess up my optimization work to the point of getting a wrong answer, I immediately see that reflected in an unexpected value of the hash.

Consider avoiding black boxes for your next benchmark. Instead, stick to natural anti-optimizing-compiler remedies:

  • Make input parameters runtime overridable (with compile time defaults),
  • print the result (or the hash thereof).

Dependable C

Hacker News
dependablec.org
2025-12-10 04:29:30
Comments...

'Source available' is not open source (and that's okay)

Hacker News
dri.es
2025-12-10 03:33:14
Comments...
Original Article

I have spent twenty years working on open source sustainability, so watching a fight ignite between Ruby on Rails creator David Heinemeier Hansson and WordPress founding developer Matt Mullenweg this week felt uncomfortably familiar in a way I wish it didn't.

David Heinemeier Hansson (also known as DHH) released a new kanban tool, Fizzy, this week and called it open source .

People quickly pointed out that the O'Saasy license that Fizzy is released under blocks others from offering a competing SaaS version, which violates the Open Source Initiative's definition. When challenged, he brushed it off on X and said, "You know this is just some shit people made up, right?". He followed with "Open source is when the source is open. Simple as that".

This morning, Matt Mullenweg rightly pushed back. He argued that you can't ignore the Open Source Initiative definition. He compared it to North Korea calling itself a democracy. A clumsy analogy, but the point stands.

Look, the term "open source" has a specific, shared meaning. It is not a loose idea and not something you can repurpose for marketing. Thousands of people shaped that definition over decades. Ignoring that work means benefiting from the community while setting aside its rules.

This whole debate becomes spicier knowing that DHH was on Lex Fridman's podcast only a few months ago, appealing to the spirit and ethics of open source to criticize Matt's handling of the WP Engine dispute. If the definition is just "shit people made up", what spirit was Matt violating?

The definition debate matters, but the bigger issue here is sustainability. DHH's choice of license reacts to a real pressure in open source: many companies make real money from open source software while leaving the hard work of building and maintaining it to others.

This tension also played a role in Matt's fight with WP Engine, so he and DHH share some common ground, even if they handle it differently. We see the same thing in Drupal, where the biggest companies do not always contribute at the same level.

DHH can experiment because Fizzy is new. He can choose a different license and see how it works. Matt can't, as WordPress has been under the GPL for more than twenty years. Changing that now is virtually impossible.

Both conversations are important, but watching two of the most influential people in open source argue about definitions while we all wrestle with free riders feels a bit like firefighters arguing about hose lengths during a fire.

The definition debate matters because open source only works when we agree on what the term means. But sustainability decides whether projects like Drupal, WordPress, and Ruby on Rails keep thriving for decades to come. That is the conversation we need to have.

In Drupal, we are experimenting with contribution credits and with guiding work toward companies that support the project. These ideas have helped, but also have not solved the imbalance.

Six years ago I wrote in my Makers and Takers blog post that I would love to see new licenses that "encourage software free riding", but "discourage customer free riding". O'Saasy is exactly that kind of experiment.

A more accurate framing would be that Fizzy is source available. You can read it, run it, and modify it. But DHH's company is keeping the SaaS rights because they want to be able to build a sustainable business. That is defensible and generous, but it is not open source.

I still do not have the full answer to the open source sustainability problem. I have been wrestling with it for more than twenty years. But I do know the solution is not renaming the problem.

Some questions are worth asking, and answering:

  • How do we distinguish between companies that can't contribute and those that won't?
  • What actually changes corporate behavior: shame, self-interest, punitive action, exclusive benefits, or regulation?

If this latest fight nudges us away from word games and toward these questions, some good may come from it.

The end of the kernel Rust experiment

Linux Weekly News
lwn.net
2025-12-10 02:57:53
The topic of the Rust experiment was just discussed at the annual Maintainers Summit. The consensus among the assembled developers is that Rust in the kernel is no longer experimental — it is now a core part of the kernel and is here to stay. So the "experimental" tag will be coming off. Congratul...
Original Article

[Posted December 10, 2025 by corbet]

The topic of the Rust experiment was just discussed at the annual Maintainers Summit. The consensus among the assembled developers is that Rust in the kernel is no longer experimental — it is now a core part of the kernel and is here to stay. So the "experimental" tag will be coming off. Congratulations are in order for all of the Rust-for-Linux team.

(Stay tuned for details in our Maintainers Summit coverage.)


Looking for guidance on improving an offline security tool I built

Lobsters
lobste.rs
2025-12-10 02:56:43
I’ve spent the last six months building an offline security assistant. It runs entirely locally and is meant to help with day to day pentest and blue team work without sending anything to the cloud. It grew out of my own learning process because I’m self taught and wanted to build something useful w...
Original Article

I’ve spent the last six months building an offline security assistant. It runs entirely locally and is meant to help with day-to-day pentest and blue team work without sending anything to the cloud. It grew out of my own learning process because I’m self-taught and wanted to build something useful while improving my skills.

I’ve now reached the point where I’ve taken it as far as I can alone. I open sourced it because I need advice from people who know this space better than I do. I have tried posting about it elsewhere to get feedback but the posts either get removed or sink without any replies.

I am not trying to sell anything. I genuinely want technical guidance on how to present the project properly, what direction to take it in, and what the community thinks about the idea and execution. If anyone is willing to look at it and give honest feedback I would appreciate it. I can share the repository if that is appropriate here.

Starbucks Workers Are Still Without a Labor Deal Four Years After Their First Union Win. Here’s Why

Portside
portside.org
2025-12-10 00:49:25
Original Article

August Code works at the first ever Starbucks location to unionize in 2021. But four years after that vote, he and his co-workers in Buffalo, New York, are still waiting on their union contract.

“I would have imagined we would have seen a contract a long time ago,” Code told CNN. “To think we don’t have a contract four years later, yeah, that’s upsetting. I didn’t think we’d be at this point.”

Tuesday is the anniversary of the first union win at Starbucks . The union-organizing campaign there has been one of the biggest successes in the American labor movement in the past few years.

Concerns over working conditions during the pandemic spurred on younger workers, who are generally pro-union and make up a large share of Starbucks’ workforce, to unionize. About 560 Starbucks locations have voted for union representation since that first vote four years ago, according to the union Starbucks Workers United. (An additional 90 stores that organized have closed amid a slew of store closings).

But despite the momentum, there is still no labor contract, a key goal of union representation. Contracts can further workers’ voices and improve wages, benefits and other working conditions.

US labor laws can’t help new unions force companies to reach a deal. The laws only require employers to bargain in “good faith,” meaning there are basically no penalties if companies drag out negotiations for years.

Liz Shuler, president of AFL-CIO, told CNN that the lack of a contract at Starbucks after four years is a sign labor laws need changing.

“People want to feel they’ve taken this risk and done it for a reason, and that would be to have a contract,” Shuler said. “I think they’ll get there. But it’s going to take some time because these corporations are able to withstand this kind of effort.”

Similar to Starbucks, other recent high-profile union campaigns haven’t yet reached a first contract.

That includes Amazon warehouse workers in Staten Island, New York, who voted in 2022 to form the tech giant’s first union. And the United Auto Workers union last year won the right to represent workers at a Volkswagen plant in Chattanooga, Tennessee – the first shot at organizing the approximately 150,000 US auto workers employed at nonunion plants.

Both sides blame the other for lack of contract

Companies often show little willingness to meet the unions’ bargaining demands, even after their workers vote for representation.

Amazon doesn’t even recognize the victory at its unionized warehouse, continuing to challenge the results. Rank-and-file members have authorized a strike at the US Volkswagen plant, but no date has been set.

Starbucks regularly argues that its employees don’t need a union since it pays better wages and benefits than many other retailers. The union is seeking wage improvements, better staffing at stores and improved scheduling rules.

Its workers continue to win representation elections. But talks between the union and management have dragged on for so long that many workers who voted in early elections have already left the company.

The two sides appear far apart on any deal, with each blaming the other since mediated talks ended this past spring.

“This company responded in such a way from the onset that we knew it was going to be a fight,” said Michelle Eisen, one of the leaders of that initial union campaign in Buffalo. She has since left the company after 15 years to work for the union.

Starbucks, meanwhile, insists it wants to reach a contract with the union.

“For months, we were at the bargaining table, working in good faith with Workers United and delegates from across the country to reach agreements that make sense for partners and for the long-term success of Starbucks,” Sara Kelly, a top Starbucks executive, told employees in a memo last month.

The union is waging an open-ended strike at about 150 stores that started on November 13 — also known as “Red Cup” day, one of Starbucks’ biggest promotional days every year.

“I truly believe this is the tipping point,” Eisen said. “I’ve never seen workers as fired up as they are right now.”

Starbucks said that the strike did not affect sales that day and that the stores facing strikes are a small fraction of the 10,000 company-owned US stores. Less than 5% of Starbucks’ 240,000 front-line employees are union members.

But a union fight is another headache for Starbucks coming off years of declining sales and following hundreds of store closures in September. North American sales fell 2% over the 12 months ending in late September and would have fallen twice that much if not for increased prices. US tariffs have also boosted the price of coffee, which retails for nearly 19% more than last year, according to the latest government data.

Difficult first contract is the norm

Failing to reach a quick first contract is not unique to Starbucks, or Amazon or Volkswagen.

Only 37% of newly formed unions reach an initial contract within a year, and 48% reach a deal within 18 months, according to ongoing research by Johnnie Kallas, assistant professor of labor studies at the University of Illinois.

The American labor movement is seeking legislation that would help unions win that first contract more quickly.

Senator Josh Hawley, a Republican from Missouri, introduced legislation in March to impose binding arbitration if a newly formed union and the company can’t reach a contract within months.

“Workers are often prevented from enjoying the benefits of the union they voted to form when mega-corporations drag their feet, slow-walk contract negotiations, and try to erode support for the union,” Hawley said in a statement in March.

The bill has widespread Democratic support as well as a few other Republican co-sponsors. But the legislation has so far gone nowhere.

Despite the lack of legislation, Shuler voiced confidence the Starbucks union will eventually get the contract, especially because of the commitment of the union’s younger membership.

“I feel like they’re in it for the long haul,” she said.

Some of the union activists at Starbucks said the lack of a contract has made it easier to organize. That’s because it demonstrates the need for a union to improve conditions.

“It hasn’t slowed down our organizing efforts at all,” said Diego Franco, a striking union barista from Des Plaines, Illinois, and a member of the union’s bargaining committee.

Franco also expressed confidence in a win.

“Eventually, the company is going to cave and we’re going to win the strong contract we’ve been fighting for – whether I’m still around or not,” he said.

Get on the Job and Organize: A Q&A With Jaz Brisack

Portside
portside.org
2025-12-10 00:39:14
Get on the Job and Organize: A Q&A With Jaz Brisack Judy Tue, 12/09/2025 - 19:39 ...
Original Article

Jaz Brisack helped lead the unionization of a Starbucks store in downtown Buffalo, New York | Malik Rainey / The New York Times

After years and years of declining union density in America, the beginning of the 2020s felt like a sea change. Upstart labor campaigns notched huge wins at Amazon, public support for unions reached new heights, and new organizing election petitions with the National Labor Relations Board soared.

Halfway through the decade though, the surge in labor organizing has not managed to slow the decline in Americans represented by a union. Why?

Pulling from experiences at the heart of union campaigns at Nissan, Tesla, and Starbucks, labor organizer Jaz Brisack details how the deck of American labor law is stacked in favor of employers. But that isn’t the only thing holding the labor movement back — Brisack’s experiences also show how major labor unions can derail campaigns with onerous bureaucracy that restricts worker leadership.

The bulk of Get on the Job and Organize: Standing Up for a Better Workplace and a Better World, available here, focuses on Brisack’s time as co-founder of Starbucks Workers United and the campaign to organize cafes born in Buffalo, New York.

Brisack started at Starbucks as a “salt” — someone who gets a job in a workplace with the intent of unionizing it — and continues to train people eager to pursue that path at the Inside Organizer School . But the effort to unionize Starbucks was a salt and pepper campaign, as organic leaders soon emerged among Brisack’s colleagues.

Inequality.org sat down with Brisack earlier this month to discuss their book and the lessons the labor movement can draw from worker-led campaigns.

This interview has been edited for length and clarity.

Chris Mills Rodrigo: One theme across the campaigns you discuss in this book is the importance of winning the “right to organize” before anything else. Could you explain why that’s a prerequisite to workplace issues? How can bosses take advantage of more narrow issue campaigns?

Jaz Brisack: The right to organize almost sounds like a platitude sometimes, but it’s also the very core of what we’re fighting for. I think it sounds like a platitude because we often have unions and politicians giving lip service to the right to organize without actually committing to the fight of what it means to win the right to organize. Throughout labor history a huge piece of every fight for greater labor rights was the right to have democracy in the workplace, to have an independent organization where workers can advocate for themselves. That’s still what’s at the heart of the right to organize on every campaign, whether it’s Nissan, Starbucks or Tesla.

Companies have unilateral control if workers don’t have a union and companies want to maintain that control, so unions basically have to make it more difficult, more painful, more costly for a company to continue to insist on crushing the union rather than deal with sharing power. This also gets into a psychological question of how the management is thinking about this. Some of them are thinking about it as a business equation, but some of them — like Howard Schultz at Starbucks — are thinking about it as a referendum on their own leadership in a very personal way.

Companies will do just about anything rather than give up unilateral control. We’ve gotten the question of “what would you like to improve with a union?” on every campaign and our answer was always we want a voice on the job and we’ll get into this more at the bargaining table. Sometimes you have to talk with co-workers about things you want to change or tell the press about some of the conditions in the workplace. But I think companies will try to find out what workers would like to have changed, whether that’s a bad manager, whether that’s pay issues, whatever it is, and, often, will make any kinds of improvements that they can, short of actually giving workers an independent democratic body — that is, a union.

CMR: What do you think is behind that fear of giving up unilateral control?

JB: It’s really about the desire to control workers. Without a union, management has the final say on everything, and workers have basically no rights to their jobs or to how they want to work. Bosses can fire people for just about any reason, except for a protected reason. But even then, that’s a very hard thing to prove, and management can easily claim it’s for something else. This desire to have full power over workers and over the company is extremely motivating to corporations.

I think there’s secondary fears about whether the union will allow for flexibility, will the company be as profitable, etc. But often companies will spend more to crush the union than they would spend to actually recognize and negotiate a first contract. And so I think this question of control is the piece that’s really at the heart of the matter.

CMR: Does the current state of the NLRB — which was already rife with delays and heavily tilted to employers before the second Trump term — change the calculus around the right to organize or how you would approach a new union campaign?

JB: I think barely, if at all. Maybe it’s made me a little bit more open to card check, but I think that’s almost a semantics question. We’ve asked all of these companies, from Nissan to Starbucks to Ben and Jerry’s, etc., to sign the Fair Election Principles. The reason for doing so is that having an election is kind of psychologically considered even more of a gold standard, including by workers. The NLRB has historically been much better at administering elections than it has been at enforcing any other part of labor law. So having an election with neutrality, or with equal time for a union campaign, is much preferable to having a card check scenario where the company is fighting.

But I think the Starbucks campaign shows the limitations of even a “favorable” NLRB. Starbucks broke the law hundreds of times in each city, thousands of times across the country. I was fired in 2022; workers across the country are still awaiting reinstatement, awaiting back pay. Starbucks basically had no incentive not to break the law, and in fact, had every incentive to break the law and deal with the consequences later. Even at the best of times, the NLRB isn’t really sufficient to protect the right to organize. The laws are very weak. There’s no penalties, there’s only remedies.

I think winning the right to organize is not something that we’re going to get through the law. It’s something that we have to get through the court of public opinion, through consumer pressure. Exactly what the “hammer” is depends on the company and on their business model, etc. At Starbucks, we were pretty convinced it was a boycott and I think that’s just more true than ever, as the NLRB is either undermined or actively hostile.

CMR: That’s a perfect segue, because I wanted to talk about hammers next and the importance of having them to compel companies to the bargaining table. What about American labor unions makes them so reticent to even threaten the use of hammers?

JB: Million dollar question, if we could fix this one we would have a very different state of union density in the US. I think partly it’s that it’s very hard to actually commit to. It’s easy to call for a boycott on social media, and sometimes you get lucky with those like the solidarity with Palestine boycott of Starbucks, which I think exceeded our wildest expectations. But to really do a consumer boycott of a company you actually need to have people outside of stores, picket captains, resources. I think the Teamsters at Chipotle could have easily brought Chipotle down with a real consumer campaign. The UAW at Nissan — when Richard and his crew were testing the impact of student and church group pickets outside of Nissan dealerships the results were very encouraging, people were not buying cars once they knew what was going on. That could have potentially won us the right to organize at Nissan.

It’s partly a resource question and then partly a strategy question with Starbucks. Starbucks has 10,000 stores, you would have needed a presence at a huge number of those stores to ensure and force a financial impact.

Workers United is affiliated with SEIU, and there were different schools of thought within Workers United and SEIU about how you should even go about organizing: through NLRB elections and contracts versus a much more kind of legislative reform, wage based Fight for 15 style model. SEIU has tried to jump through hoops to reconcile the Starbucks campaign with their approach at Waffle House or McDonalds where they refuse to file for elections and are instead doing days of action and pressure campaigns. I think there’s also a fear of the very grassroots nature of these union campaigns where there’s been struggles over how much control workers would actually have of the campaigns versus decision makers within the union.

CMR: Unionized Starbucks workers went on strike starting November 13th, Red Cup Day, and more have joined since. How did we get to this point?

JB: This is really an extension of one of the Starbucks campaign strategy camp’s tactics of having national strikes on a consistent basis. The union and the company probably haven’t been that far off on a deal. We knew a while back that they really only had an impasse on economic issues. Everything else was mostly done. I think the real question is: could there be a deal on a contract that would actually win the right to organize at all of the stores versus kind of limiting the union to a minority? And I think that remains to be seen.

The campaign is still remarkably resilient. A lot of the workers who are on strike now are folks who came into the movement later and who are amazing leaders. It’s really impressive how much people have been able to withstand.

With this strike there’s been sort of a tentative call for a boycott with figures like Zohran saying don’t go to Starbucks. We should have had that energy in 2022 when Starbucks fired the Memphis Seven . Now there’s been sort of on-again, off-again momentum. Starbucks is not in the public eye the way we were in 2022 and the union’s hesitation around endorsing the Palestine boycott was definitely a cause of some lost momentum. But I think, better late than never, hopefully there is a contract, and then things can always improve from there.

CMR: What was the internal discussion like over calling for a boycott back then?

JB: In retrospect we should have just done it. The core organizing committee, which was still largely a Buffalo group but was expanding to other parts of the country, were the ones writing all the press releases, doing social media, and kind of controlling the public narrative. I think the decision making at that point was a bit muddy, SEIU wasn’t really in the picture yet, it was Workers United leadership and they were very hesitant that it’d be hard to enforce a boycott, that it might not work.

There was also a school of thought that striking was our only form of worker power and that boycotting was sort of a cop-out or not as militant of a strategy as striking over these issues. And then I think there was also a fear that it would hurt the organizing effort at other stores, which I would argue it did anyway, but we didn’t actually have the leverage that a boycott would have given us.

CMR: I was very fascinated with the internal conflict you seem to have had over organizing at Starbucks. As a campaign framing it makes sense to say you want the company to be better, even though in the back of your mind you would prefer there not to be a corporate behemoth dominating local cafes. How do you work through this tension?

JB: I had to wrestle with this a lot. Basically the way I approach it is: we’re unlikely to put Starbucks out of business. A couple of times it actually did seem more likely that Starbucks might go out of business rather than respect the right to organize. But I don’t think there’s any world in which Starbucks actually goes under. Like Walmart’s market share is now being threatened by Amazon, but it’s not like Walmart is going away. Starbucks may lose some of its ubiquity, but it is still too big of a player to fully get rid of.

My personal way of reconciling the positive rhetoric was just focusing on things I really did like about the job. I was an opener, the camaraderie with my coworkers was really great, the kind of people who are attracted to working at Starbucks are really great. I think it’s kind of true with baristas anywhere, but certainly among Starbucks workers there was just an immediate understanding and an immediate jargon that was universally shared and made for very easy, immediate bonding.

And then a lot of customers aren’t great, but there were fun parts of the job. I actually really enjoy coffee. Starbucks, unfortunately, doesn’t really allow you to do that much of the craft part. But we would find ways to learn about coffee and make good coffee by bringing in beans ourselves. So I was able to find ways not to lie when I was talking about loving my job.

We can’t fight all of the battles all at once. We have to find stepping stones to getting where we want. And the first step has to be really changing union density. One worker asked me very early on, I can’t remember if this is in the book, would Starbucks take us seriously if we said we loved the company instead of, you know, we want to overthrow capitalism. I was like, don’t worry, they will see the word union and they will understand what that means, because that’s what actually is threatening to them.

CMR: In the same vein, as a kind of wrap up question, what would you hope that other unions can learn from the Starbucks campaign, which remains one of the most exciting in recent memory?

JB: I would say it underscores the importance of calling the question on the right to organize very early. We were talking about doing this before we even voted in the first elections, and we decided to wait long enough to have a concrete election victory that we could then point to and say the workers have spoken and they want a union. I wouldn’t wait much longer than that. You don’t gain a lot of momentum over a longer period of time, and we have to get better as a labor movement about really acting as one big union. We had unions that were offering to adopt stores and take on picketing at various locations, and there was never really a willingness to even ask them to put that into practice.

The other main piece is that unions need to be less worried about control and more worried about throwing things at the wall and seeing what sticks. We are dying as a labor movement. We have existential threats from the government, from corporations, from all of the typical factors, but it all seems even more ramped up these days. So we need to be less worried about whether Tesla workers are organizing jurisdictionally with the right union, or whether Starbucks workers might somehow say the wrong thing at the bargaining table if they’re allowed to bargain at hundreds of stores simultaneously, or whether Chipotle workers should be allowed to do a national pressure campaign.

Let workers take autonomy in their own campaigns and see what works.

===

10 Years of Let's Encrypt

Simon Willison
simonwillison.net
2025-12-10 00:34:15
10 Years of Let's Encrypt Internet Security Research Group co-founder and Executive Director Josh Aas: On September 14, 2015, our first publicly-trusted certificate went live. [...] Today, Let’s Encrypt is the largest certificate authority in the world in terms of certificates issued, the ACME...
Original Article

10 Years of Let's Encrypt (via) Internet Security Research Group co-founder and Executive Director Josh Aas:

On September 14, 2015, our first publicly-trusted certificate went live. [...] Today, Let’s Encrypt is the largest certificate authority in the world in terms of certificates issued, the ACME protocol we helped create and standardize is integrated throughout the server ecosystem, and we’ve become a household name among system administrators. We’re closing in on protecting one billion web sites.

Their growth rate and numbers are wild:

In March 2016, we issued our one millionth certificate. Just two years later, in September 2018, we were issuing a million certificates every day. In 2020 we reached a billion total certificates issued and as of late 2025 we’re frequently issuing ten million certificates per day.

According to their stats the amount of Firefox traffic protected by HTTPS doubled from 39% at the start of 2016 to ~80% today. I think it's difficult to over-estimate the impact Let's Encrypt has had on the security of the web.

Six Myths About Rural America: How Conventional Wisdom Gets It Wrong

Portside
portside.org
2025-12-10 00:02:14
Six Myths About Rural America: How Conventional Wisdom Gets It Wrong Judy Tue, 12/09/2025 - 19:02 ...
Original Article

Dusk in downtown Lumberton, county seat in Robeson County, N.C., the most diverse rural county in America. | AP Photo/David Goldman

Roughly 1 in 5 Americans live in rural areas – places the federal government defines based on small populations and low housing density.

Yet many people understand rural America through stereotypes. Media and political conversations often use words or terms such as “fading,” “white,” “farming,” “traditional” and “politically uniform” to describe rural communities.

In reality, rural communities are far more varied. Getting these facts right matters because public debates, policies and resources – including money for programs – often rely on these assumptions, and misunderstandings can leave real needs neglected.

We are rural demographers at Louisiana State University and Syracuse University who study the causes and consequences of well-being in rural America. Here we outline six myths about rural America – a few among many – highlighted in our recent book “ Rural and Small-Town America: Context, Composition, and Complexities .”

Myth 1: Rural America is disappearing due to depopulation

Many people think rural America is emptying out. The story is more complicated. It’s true that from 2010 to 2020 most rural counties lost population. But about one-third grew, especially those near cities or those with lakes, mountains and other natural attractions. And there have been times, like in the 1970s and 1990s, when rural populations grew faster than cities – periods called “rural rebounds.”

An important thing to know about rural population change is that the places defined as “rural” change over time. When a rural town grows enough, the U.S. Office of Management and Budget reclassifies it as “urban.” In other words, rural America isn’t disappearing – it’s changing and sometimes urbanizing.

Myth 2: Most rural Americans live on farms

Farming is still important in many rural places, but it’s no longer the way most rural Americans make a living. Today, roughly 6% of rural jobs are in agriculture. And most farm families also have members who work off-farm jobs, often for access to health insurance and retirement benefits.

A bigger source of employment in rural America is manufacturing. In fact, manufacturing plays a larger role as a share of jobs and earnings in rural areas than in cities. That also means that deindustrialization – steady job losses in manufacturing over the decades – has been especially painful in rural America. Unlike large cities with lots of employers, rural communities rely on just a few. When a rural plant or factory closes, the local impacts are often devastating.

The largest share of rural jobs today is in service-sector work, such as retail, food service, home health care and hospitality. These jobs often pay low wages, offer few benefits and have unstable hours, making it harder for many rural families to stay financially secure.

Myth 3: Only white people live in rural America

People often picture rural America as mostly white, but that’s not the full story. About 1 in 4 rural residents are nonwhite. Hispanic and Black people make up the largest shares, and Indigenous people have a greater portion of their population living in rural areas than any other racial group.

Rural America is also getting more racially and ethnically diverse every year. Young people are leading that change: About 1 in 3 rural children are nonwhite. The future of rural America is racially diverse, even if popular images don’t always show it.

Myth 4: Rural America is healthier than urban America

Many people imagine rural life as healthier than city life. But the opposite is true. People in rural areas die younger and at higher rates than people in cities. Scholars call this the “rural mortality penalty,” and it has been widening for years. The COVID-19 pandemic made the gap even larger due to higher death rates in rural communities.

This isn’t just because rural areas have more older people. Rural working-age people, ages 25 to 64, are dying younger than their urban peers, and the gap is growing. This trend is being driven by nearly all major causes of death. Rural residents have higher rates of early death from cancers, heart disease, COVID-19, motor vehicle crashes, suicide, alcohol misuse, diabetes, stroke and pregnancy-related complications.

Myth 5: Rural families are more traditional than urban families

Images of rural life often evoke households in which married couples are raising children in traditional family structures. Historically, rural children were more likely to live with married parents. But that’s no longer the case.

Today, rural children are less likely than urban children to live with married parents and are more likely to live with cohabiting unmarried parents or in the care of grandparents or other relatives. Partly as a result, rural child poverty rates are higher than urban rates, and many rural families rely on safety-net supports such as the food aid program SNAP. Rural families are diverse, and many are economically vulnerable.

Myth 6: A new ‘rural revolt’ gave Donald Trump his presidential victories

Many rural voters have supported Donald Trump, but this didn’t happen overnight.

For much of the 20th century, Democrats drew major support from rural areas due to the party’s alignment with the working class and 100 years of single-party rule in the South spanning Reconstruction to the civil rights era.

However, social class and regional flips in voting patterns have meant rural voters have been shifting toward Republicans for nearly 50 years. The last time rural and urban residents voted within 1 percentage point of each other was in 1976, when Georgia peanut farmer and former governor Jimmy Carter was elected.

The partisan gap between rural and urban voters averaged 3 percentage points in the 1980s and 1990s, before growing to 10 percentage points in the 2000s and 20 percentage points in recent cycles. So, Trump’s support in rural America was not a new “revolt” but part of a long-term trend.

And in 2024, the key geographic story wasn’t rural voters at all – it was the sharp drop in turnout in big cities. Both candidates got fewer urban votes than in 2020, with Kamala Harris capturing over 10 million fewer votes in major and medium-sized cities than Joe Biden had four years earlier.

Share of votes for the Republican presidential candidate in rural and urban counties, 1976-2024. Rural counties are nonmetropolitan and urban counties are metropolitan based on 2013 definitions. Excludes Alaska because it reports election results for election districts rather than counties.

===

Tim Slack Professor of Sociology, Louisiana State University

===

Devstral 2

Simon Willison
simonwillison.net
2025-12-09 23:58:27
Devstral 2 Two new models from Mistral today: Devstral 2 and Devstral Small 2 - both focused on powering coding agents such as Mistral's newly released Mistral Vibe which I wrote about earlier today. Devstral 2: SOTA open model for code agents with a fraction of the parameters of its competitors a...
Original Article

Devstral 2. Two new models from Mistral today: Devstral 2 and Devstral Small 2 - both focused on powering coding agents such as Mistral's newly released Mistral Vibe which I wrote about earlier today.

  • Devstral 2: SOTA open model for code agents with a fraction of the parameters of its competitors and achieving 72.2% on SWE-bench Verified.
  • Up to 7x more cost-efficient than Claude Sonnet at real-world tasks.

Devstral 2 is a 123B model released under a janky license - it's "modified MIT" where the modification is:

You are not authorized to exercise any rights under this license if the global consolidated monthly revenue of your company (or that of your employer) exceeds $20 million (or its equivalent in another currency) for the preceding month. This restriction in (b) applies to the Model and any derivatives, modifications, or combined works based on it, whether provided by Mistral AI or by a third party. [...]

Devstral Small 2 is under a proper Apache 2 license with no weird strings attached. It's a 24B model which is 51.6GB on Hugging Face and should quantize to significantly less.

I tried out the larger model via my llm-mistral plugin like this:

llm install llm-mistral
llm mistral refresh
llm -m mistral/devstral-2512 "Generate an SVG of a pelican riding a bicycle"

Bicycle looks a bit like a cybertruck

For a ~120B model that one is pretty good!
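
The same prompt can also be driven from Python through llm's library API rather than the CLI (a minimal sketch, assuming the llm package and the llm-mistral plugin are installed and a Mistral API key is configured):

import llm

# Same model ID that the CLI uses with -m
model = llm.get_model("mistral/devstral-2512")
response = model.prompt("Generate an SVG of a pelican riding a bicycle")
print(response.text())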

Here's the same prompt with -m mistral/labs-devstral-small-2512 for the API hosted version of Devstral Small 2:

A small white pelican on what looks more like a child's cart.

Again, a decent result given the small parameter size. For comparison, here's what I got for the 24B Mistral Small 3.2 earlier this year.

The 2024 Free Software Awards winners

Linux Weekly News
lwn.net
2025-12-09 23:55:40
The Free Software Foundation has announced the recipients of its 2024 (even though 2025 is almost over) Free Software Awards. Andy Wingo won the award for the advancement of free software, Alx Sa is the outstanding new free-software contributor, and Govdirectory takes the award for projects of soci...
Original Article

[Posted December 9, 2025 by corbet]

The Free Software Foundation has announced the recipients of its 2024 (even though 2025 is almost over) Free Software Awards. Andy Wingo won the award for the advancement of free software, Alx Sa is the outstanding new free-software contributor, and Govdirectory takes the award for projects of social benefit.


NYC congestion pricing cuts air pollution by 22% in six months

Hacker News
airqualitynews.com
2025-12-10 02:58:45
Comments...
Original Article

In its first six months, New York City’s controversial congestion pricing scheme has reduced air pollution by 22% in Manhattan’s toll zone, while improving air quality across the entire metropolitan region, according to new research.

The Cornell University study analysed data from 42 air quality monitors throughout the New York area between January 2024 and June 2025, tracking PM2.5 concentrations before and after the January 2025 launch of the Congestion Relief Zone (CRZ).

The findings provide the first rigorous evidence that charging drivers to enter Manhattan’s core delivers substantial public health benefits.

Within the CRZ, which covers Manhattan streets at or below 60th Street, average daily peak concentrations of PM2.5 dropped by 3.05 µg/m³. For context, background pollution levels in the region typically hover around 8-9 µg/m³, making this reduction particularly significant for public health.

Notably, the benefits were found to extend far beyond the toll zone itself. Across New York City’s five boroughs, pollution levels fell by an average of 1.07 µg/m³, while the broader metropolitan area saw reductions of 0.70 µg/m³. This refutes claims that congestion pricing merely pushes traffic and its associated pollution to neighboring communities.

The improvements grew stronger over time, suggesting drivers are increasingly adapting their behavior. In the CRZ’s first week, pollution reductions within the toll zone averaged just 0.8 µg/m³. By the 20th week, that figure had grown to 4.9 µg/m³, suggesting commuters were switching to public transit, rescheduling trips or finding alternative routes.

Indeed, traffic data supports this. Between January and June 2025, vehicle entries into the toll zone dropped approximately 11% overall, with heavy-duty truck traffic falling by 18% and passenger cars declining by 9%. The disproportionate reduction in truck traffic appears particularly important, as these vehicles contribute heavily to urban air pollution despite representing a smaller share of total traffic.

The results exceed outcomes from similar programs in European cities. Stockholm’s congestion pricing reduced air pollution by 5-15% over several years, while London’s Ultra Low Emission Zone achieved roughly a 7% citywide decline. The researchers suggest that New York’s comparatively larger impact reflects the city’s exceptional transit infrastructure and the high volume of discretionary trips that drivers can easily shift to subways and buses.

The findings arrive as other American cities, including San Francisco and Los Angeles, consider implementing their own congestion pricing systems. New York’s experience suggests such programs can deliver rapid environmental benefits while generating revenue for transit improvements – a dual outcome that urban planners have long sought but rarely achieved.

Senior author Oliver Gao said: ‘Our overall conclusion is that congestion pricing in New York City, like many other cities in the world that have implemented it, helped not only improve traffic, but also helped reduce air pollutant concentration, improve air quality and should be good for public health.’

The study’s co-lead author Timothy Fraser added: ‘It’s really exciting to me that air quality improved throughout the entire metro area. This tells us that congestion pricing didn’t simply relocate air pollution to the suburbs by rerouting traffic. Instead, folks are likely choosing cleaner transportation options altogether, like riding public transportation or scheduling deliveries at night. This thins traffic and limits how smog compounds when many cars are on the road.’

Photo: Franz Boccalatte / Unsplash

Making macOS Bearable

Hacker News
seg6.space
2025-12-10 02:37:34
Comments...
Original Article

Intro

Ideally, a computer system should feel like an extension of your body. When you pick up a cup of coffee, you don't consciously think, "I need to engage my bicep, extend my forearm, and grasp with my fingers." You just think "drink coffee," and your body complies.

I've spent the better part of eight years on various flavors of Arch Linux, and over that time I settled into a local minimum: a system configuration where I can enter a flow state, forget I'm using a computer at all, and just focus on the work. The machine disappears.

Recently, I started using macOS (my workplace issued me an M4 Pro MacBook, and I can't yet put Asahi Linux on it), and with this change, that neural link was severed. Stock macOS gives me something like motion sickness whenever I try to accomplish anything. There's just too much friction in Spaces, Mission Control, window management, all of it.

So I set out to fix this for myself.

The "Where's Waldo" Problem

Apple wants you to use Mission Control. They want you to swipe up with three fingers, see a scattered mosaic of every window you have open, and then use your eyes to scan for the one you want.

mission control

This is terrible!!!

Visual search is the most expensive cognitive task you can perform while focused on doing something. Every time you have to scan the screen to find a window, you are breaking context.

My hierarchy of navigation is as follows:

  1. Shortcuts: I know exactly where something is. I press a key, and I am there.
  2. Fuzzy Finding: I know what I want, but not where it is. I type three letters into Raycast, and it appears.
  3. Visual Search: This is the fallback I try to never use.

Encoding Location with Aerospace

The default macOS window model is "floating." Windows pile on top of each other, you drag them around manually, and Spaces lets you swipe between virtual desktops that have no enforced structure. It's flexible, but flexibility without constraints is just chaos.

To fix this, I use Aerospace. It's a tiling window manager that replaces the native "Spaces" concept with rigid, deterministic workspaces.

aerospace

Aerospace allows me to spatially encode my software. I don't need to "check" where Spotify is. Spotify is on Workspace 9. Always. My browser is on Workspace 1. My terminal is on Workspace 2.

# Pin workspaces 7-9 to the secondary monitor
[workspace-to-monitor-force-assignment]
7 = 'secondary'
8 = 'secondary'
9 = 'secondary'

# Route new Ghostty windows straight to the terminal workspace
[[on-window-detected]]
if.app-id = 'com.mitchellh.ghostty'
run = 'move-node-to-workspace 2'

This turns navigation into muscle memory. Cmd-2 is not "Switch to Terminal"; Cmd-2 is just the physical reflex of "I want to code." I don't look. I just hit the key combination, and the active workspace changes.

Development Workspace

Inside Workspace 2 lives Ghostty, running Tmux.

But standard Tmux keybinds are too clunky. The default Ctrl-b prefix doesn't spark joy to use. I use root bindings (-n) to bypass the prefix entirely where I see fit.

I don't use panes; I use full windows as "views." Alt-1 switches to the first window. Alt-2 switches to the second. But here is the logic that makes it flow:

bind -n M-1 if-shell 'tmux select-window -t 1' '' 'new-window -t 1'

If window 1 doesn't exist, it creates it. I don't "manage" windows; I just go to where I want to be, and the system accommodates me.

To glue it all together, I wrote a custom Rust tool called ws .

ws session switcher in action

When I hit Alt-s, a fuzzy finder pops up over my current work. I pick a project, and ws instantly attaches to that session or spins up a new environment with my editor (helix) and file manager (fx) ready to go. It maintains a stack-based history, so I can jump to a project, fix a bug, and hit "Back" to return to exactly where I was.
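
To make the "stack-based history" idea concrete, here is a toy Python sketch of the behaviour (purely illustrative; the real ws is a Rust tool and none of these names come from it):

class SessionHistory:
    """Toy model of a ws-style history: jump to a project, then pop back to where you were."""

    def __init__(self, start):
        self.stack = [start]           # top of the stack is the current session

    def jump(self, session):
        self.stack.append(session)     # Alt-s: switch to a project, remembering the previous one
        return session

    def back(self):
        if len(self.stack) > 1:
            self.stack.pop()           # drop the current session
        return self.stack[-1]          # land back on the previous one


history = SessionHistory("blog")
history.jump("bugfix")                 # jump off to fix a bug
print(history.back())                  # "Back" returns "blog"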

The Language of Motion

Humans are incredibly good at language. We are hardwired for syntax, grammar, and structure. We are not hardwired to hunt for pixels on a glowing rectangle.

This is why I use modal editing. It stops text manipulation from being a manual labor task, e.g. dragging a mouse, holding backspace, and turns it into a conversation. If I want to change the text inside some quotes, I don't drag a cursor; I speak the command: ci" (change inside quotes). It is linguistic. I am speaking to the editor in a language we both understand.

The problem with modern OS design is that it abandons this linguistic efficiency for visual clutter.

Bypassing the Mouse

Of course, I still use the mouse. I’m not a zealot. But for 90% of web browsing, lifting my hand to the mouse is unnecessary friction.

I use Vimium in the browser.

vimium

When I want to click a link, I don't aim; I just look at it. Two letters appear over the link, I type them, and it clicks. It feels telepathic. I look at the element, and the element activates.

I recently added Homerow to the mix, which brings this same "look and type" navigation to the entire macOS UI. It allows me to click system dialogs or toolbar buttons without ever leaving the home row.


By layering Aerospace, Tmux, and modal editing, I’ve tried to replicate that "extension of the body" feeling. The goal isn't to be a "power user" for the sake of it. The goal is to remove the lag between thinking "I want to do X" and the computer actually doing it.

The dotfiles: https://github.com/seg6/dotfiles

Show all your application error using Cloudflare Error Page

Hacker News
github.com
2025-12-10 02:18:07
Comments...
Original Article

Cloudflare Error Page Generator

📢 Update (2025/12/09): All icons used in the error page have been fully redrawn as vector assets. These icons, along with the stylesheet, are also inlined into a single file of the error page, eliminating any need to host additional resources and ensuring a better experience for you and your end users.

What does this project do?

This project creates customized error pages that mimic the well-known Cloudflare error page. You can also embed them into your website.

Online Editor

Here's an online editor to create customized error pages. Try it out here.

Editor

(And thank @rynzland for the idea!)

Quickstart for Programmers

Python

Install cloudflare-error-page with pip.

pip install git+https://github.com/donlon/cloudflare-error-page.git

Then you can generate an error page with the render function. (example.py)

import webbrowser
from cloudflare_error_page import render as render_cf_error_page

# This function renders an error page based on the input parameters
error_page = render_cf_error_page({
    # Browser status is ok
    'browser_status': {
        "status": 'ok',
    },
    # Cloudflare status is error
    'cloudflare_status': {
        "status": 'error',
        "status_text": 'Error',
    },
    # Host status is also ok
    'host_status': {
        "status": 'ok',
        "location": 'example.com',
    },
    # can be 'browser', 'cloudflare', or 'host'
    'error_source': 'cloudflare',

    # Texts shown in the bottom of the page
    'what_happened': '<p>There is an internal server error on Cloudflare\'s network.</p>',
    'what_can_i_do': '<p>Please try again in a few minutes.</p>',
})

with open('error.html', 'w') as f:
    f.write(error_page)

webbrowser.open('error.html')

Default error page

You can also see a live demo here.

A demo server using Flask is also available in flask_demo.py .

Node.js

PHP

More Examples

Catastrophic infrastructure failure

params = {
    "title": "Catastrophic infrastructure failure",
    "more_information": {
        "for": "no information",
    },
    "browser_status": {
        "status": "error",
        "status_text": "Out of Memory",
    },
    "cloudflare_status": {
        "status": "error",
        "location": "Everywhere",
        "status_text": "Error",
    },
    "host_status": {
        "status": "error",
        "location": "example.com",
        "status_text": "On Fire",
    },
    "error_source": "cloudflare",
    "what_happened": "<p>There is a catastrophic failure.</p>",
    "what_can_i_do": "<p>Please try again in a few years.</p>",
}

Catastrophic infrastructure failure

Demo

Web server is working

params = {
    "title": "Web server is working",
    "error_code": 200,
    "more_information": {
        "hidden": True,
    },
    "browser_status": {
        "status": "ok",
        "status_text": "Seems Working",
    },
    "cloudflare_status": {
        "status": "ok",
        "status_text": "Often Working",
    },
    "host_status": {
        "status": "ok",
        "location": "example.com",
        "status_text": "Almost Working",
    },
    "error_source": "host",
    "what_happened": "<p>This site is still working. And it looks great.</p>",
    "what_can_i_do": "<p>Visit the site before it crashes someday.</p>",
}

Web server is working

Demo

FAQ

How to show real user IP / Cloudflare Ray ID / data center location in the error page so that it looks more realistic?

The Ray ID and user IP fields in the error page can be set via the ray_id and client_ip properties in the params argument passed to the render function. The real Cloudflare Ray ID and the data center location of the current request can be extracted from the Cf-Ray request header (e.g. Cf-Ray: 230b030023ae2822-SJC). A detailed description of this header can be found in the Cloudflare documentation.

To look up the city name of the data center corresponding to the three-letter code in the header, you can use a location list from here.

The demo server running on our website does handle these; take a look at this file for reference.
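
As an illustration, a minimal Flask error handler along these lines could feed the real values through to render (a rough sketch, not part of the package; it assumes Flask, the cloudflare_error_page package from this repo, and that requests arrive via Cloudflare, which sets the Cf-Ray and CF-Connecting-IP headers):

from flask import Flask, request
from cloudflare_error_page import render as render_cf_error_page

app = Flask(__name__)

@app.errorhandler(500)
def internal_error(exc):
    params = {
        'browser_status': {'status': 'ok'},
        'cloudflare_status': {'status': 'ok'},
        'host_status': {'status': 'error', 'location': request.host, 'status_text': 'Error'},
        'error_source': 'host',
        'what_happened': '<p>The web server encountered an internal error.</p>',
        'what_can_i_do': '<p>Please try again in a few minutes.</p>',
    }

    cf_ray = request.headers.get('Cf-Ray')       # e.g. "230b030023ae2822-SJC"
    if cf_ray:
        params['ray_id'] = cf_ray.split('-')[0]  # the trailing "-SJC" part is the data-center code

    # Cloudflare passes the original visitor address in the CF-Connecting-IP header
    params['client_ip'] = request.headers.get('CF-Connecting-IP', request.remote_addr)

    return render_cf_error_page(params), 500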

See also

Full Parameter Reference

{
    "html_title": "cloudflare.com | 500: Internal server error",
    "title": "Internal server error",
    "error_code": 500,
    "time": "2025-11-18 12:34:56 UTC",  // if not set, current UTC time is shown

    // Configuration for "Visit ... for more information" line
    "more_information": {
        "hidden": false,
        "text": "cloudflare.com", 
        "link": "https://www.cloudflare.com/",
        "for": "more information",
    },

    // Configuration for the Browser/Cloudflare/Host status
    "browser_status": {
        "status": "ok", // "ok" or "error"
        "location": "You",
        "name": "Browser",
        "status_text": "Working",
        "status_text_color": "#9bca3e",
    },
    "cloudflare_status": {
        "status": "error",
        "location": "Cloud",
        "name": "Cloudflare",
        "status_text": "Error",
        "status_text_color": "#bd2426",
    },
    "host_status": {
        "status": "ok",
        "location": "The Site",
        "name": "Host",
        "status_text": "Working",
        "status_text_color": "#9bca3e",
    },
    "error_source": "host", // Position of the error indicator, can be "browser", "cloudflare", or "host"

    "what_happened": "<p>There is an internal server error on Cloudflare's network.</p>",
    "what_can_i_do": "<p>Please try again in a few minutes.</p>",

    "ray_id": '0123456789abcdef',  // if not set, random hex string is shown
    "client_ip": '1.1.1.1',

    // Configuration for 'Performance & security by ...' in the footer
    "perf_sec_by": {
        "text": "Cloudflare",
        "link": "https://www.cloudflare.com/",
    },
}

The AI-Education Death Spiral a.k.a. Let the Kids Cheat

Hacker News
anandsanwal.me
2025-12-10 01:36:56
Comments...
Original Article

This essay first appeared in my newsletter. Sign up here if interested in Unf^cking Education.


Your kid didn’t write their essay last night.

ChatGPT did.

And that might be the most honest thing happening in school today.

They’re copying essays from AI, running them through “humanizing” tools, and handing in work they’ve barely read. They’re having AI listen to lectures so they don’t have to. They’re sneaking AI via their mobile phones into tests.

They’re using ChatGPT for everything from math homework to history essays to college applications.

And they should be.

To be clear, I’m not advocating for AI in real learning. AI is only useful right now as a stress test as it reveals how hollow adolescent work has become. If it pushes schools toward offering work with relevance, impact, and agency and away from hopeless busywork (“When will I ever use this?”), that is a win.


Because AI isn’t the problem.

It’s just a light revealing how fake and pointless school has become.

The Death Spiral Has Already Begun

Walk into any high school classroom. A majority of the work is written by AI.

Everyone knows. Most say nothing.

Teachers pretend to grade.

Students pretend to write.

It’s as much about learning as taking your shoes off at the airport is about security.

Teachers and professors acknowledge it is rampant, but there is little they can do as evidenced by this post.

The author of this post ended it with this humorous conclusion.

So yeah. ChatGPT is my best student now. It hands in perfect work, never complains, and never asks for an extension. And the worst part? I think I like it better.

And as highlighted above, this is “every single paper”, i.e., this isn’t a few bad apples.

Parents who found their daughter cheating on multiple assignments heard:

“Everyone is doing this” and that it’s the only way to stay competitive.

McCabe’s research confirms this: once cheating becomes normalized and the system loses legitimacy, defection becomes the dominant strategy.

This is the classic prisoner’s dilemma.

  • If everyone plays fair, all benefit.
  • But if others cheat and you don’t, you fall behind.
  • So even the “good” students feel forced to cheat just to stay even.

This, however, isn’t a moral collapse.

It’s a design failure.

The real revelation?

AI exposed that a lot of school work isn’t worth the effort.

Maria Montessori said it a century ago:

“The work must be something the child feels is worth doing.”

Schools forgot and flipped that.

They assign work and expect kids to value it merely because it was assigned.

The Predictable Crackdown

Some schools and teachers unhappy with the theater chose not to look the other way and responded exactly as you’d expect.

First came the guilt: “You’re only cheating yourself.”

When that inevitably didn’t work, they escalated to AI detectors that don’t work, forced handwritten essays, laptop bans, surveillance tools.

They made classrooms, places where you’re already told to sit still and do as you’re told, even more prison-like. Not surprisingly, this same strategy is being used at universities, as Princeton University professor D. Graham Burnett reveals in this response on the Hard Fork podcast:

We’re like the sheriffs. And so the concern is all my assignments are now useless. I can’t assign papers. Am I going to have to do a blue book exam?

Their strategies, as you can see, are almost all punitive.

As one lecturer inspiringly put it:

“Catch what you can, no mercy for the damned.”

And then they wonder why students check out even more.

Here’s what they never admit: AI didn’t create the problem. It just revealed it.

The Coming Collapse

Follow the money.

What happens when a 4.0 GPA means nothing because half the work was done by AI?

We’ve seen this before.

During COVID, when school went virtual, parents saw what was really going on.

The result?

Public school enrollment dropped by 1.3 million. States like Oregon and New York lost over 5% of their students.

And the exodus will similarly accelerate when parents realize they’re paying (via taxes or tuition) for education theater and their kids are actually learning very little.

Colleges will then quietly start ignoring GPAs.

Employers will stop trusting transcripts.

And when everyone acknowledges that the product is worthless, the economic foundation collapses.

What Survives the Stress Test?

AI is a filter.

It strips away everything that can be automated, leaving only what requires actual thinking: creativity, collaboration, real-world problem-solving.

Deci & Ryan’s research says people engage when they have autonomy, competence, and purpose.

School as we’ve constructed it for hundreds of years kills all three.

But some are adapting.

  • At High Tech High, students tackle real community problems.
  • At Forney ISD in Texas , students run actual businesses inside their school.
  • At the School of Entrepreneuring , students identify and solve real problems on behalf of others while working together.

Boaler’s research confirms this: when work is relevant and challenging, cheating drops dramatically.

Not because it’s harder, but because students actually want to do the work.

We need to move to education that prioritizes engagement. (Note: school need not be easy or fun; it requires productive struggle.)

Let It Burn

AI cheating highlights that much of what passes for education today has no value.

So let AI burn it down and reveal how inane this work is.

Let it break the model so we can finally build something better.

Because the students have already figured it out.

The next time a teacher complains about AI cheating, ask: If a machine can do this assignment perfectly, why are you giving it to this student? And then we can replace it with education and work that actually matters.

224× Compression of Llama-70B with Higher Accuracy (Paper and Code)

Hacker News
zenodo.org
2025-12-10 01:25:00
Comments...
Original Article

Published December 10, 2025 | Version v1

Preprint Open

  • 1. Anima Core Inc
  • 2. Shamim Institute of Soul Systems

Description

This paper introduces the first verified method to eliminate transformers from inference while preserving, and in many cases improving, downstream accuracy.

We show that a frozen 70-billion-parameter Llama-3.3-70B model can be replaced by a 256-dimensional meaning field extracted from seven internal activation layers. A lightweight compressor (AN1) reduces these fields by 224× with an average +1.81 percentage point gain across classification tasks, including +3.25 pp on low-resource RTE (R² = 0.98 inverse-scaling fit, p < 0.01). A 30M-parameter student then learns to regenerate these fields directly from raw text, enabling full transformer-free inference at 60× higher throughput with only 0.35 pp average accuracy loss.

The core insight is that task-aligned semantics in modern transformers occupy a remarkably low-rank manifold. Across layers we observe 72–99 percent of variance in the top one to three dimensions. Once this structure is extracted and learned, the transformer becomes unnecessary. It serves as a one-time sculptor of meaning rather than the permanent home of inference.

This work establishes Field Processing Units (FPUs) as a post-transformer compute primitive that replaces deep matrix multiplication with shallow field operations.

All results are averaged over five seeds with statistical significance reported. Ablations isolate the causal contributions of field supervision, geometric regularization, and anchor-layer selection.

This Zenodo release provides the complete scientific manuscript and the baseline reference implementation for the AN1 Core system. Proprietary optimizations (AN1-Turbo) have been removed to support independent verification and further research into post-transformer inference.


Rubio Deletes Calibri as the State Department's Official Typeface

Hacker News
www.nytimes.com
2025-12-10 00:59:05
Comments...

Fear of the Walking Zig: The Security Audit Gap

Lobsters
generativeai.pub
2025-12-10 00:41:26
Comments...

Rubio stages font coup: Times New Roman ousts Calibri

Hacker News
www.reuters.com
2025-12-10 00:08:34
Comments...

US International Trade Administration Shaped EU Censorship Against US Companies

Hacker News
foundationforfreedomonline.com
2025-12-09 23:53:20
Comments...
Original Article

SUMMARY

  • X became the first American platform to be fined under the EU’s Digital Services Act, receiving a €120 million penalty after allegedly refusing to open up its data to “disinformation researchers.”
  • Disinformation researchers are critical to online censorship – they are the ones who compile databases of disfavored speech and the advertisers that fund it.
  • Without access to online platforms’ data, the international censorship machine is blind.
  • The EU’s Digital Services Act and its provisions mandating data access for researchers emerged with the full support and cooperation of the US government under the previous administration.
  • 23 US-funded “counter-disinformation” organizations are involved in the EU’s censorship regime, representing $15,444,695 in US taxpayer funding. Many of these organizations will receive access to X’s data if EU bureaucrats successfully pressure the platform.
  • Documents reviewed by FFO also expose the central role of the US Trade Representative and the US International Trade Administration at the Department of Commerce.
  • Both collaborated with the EU under the previous administration, through the US-EU Trade and Technology Council, which developed a shared list of policy priorities that were later enshrined in the Digital Services Act.
  • These include the DSA’s provisions on data access for researchers that are now being used to target X.

The European Commission announced on December 4 that it will fine Elon Musk’s X €120 million (approx. $140 million) for non-compliance with the Digital Services Act (DSA), the EU’s draconian online censorship regime. The Commission has given X 60 days to provide it with a compliance plan, at the risk of “periodic penalty payments.”

X was the first platform to be investigated under the DSA. Immediately upon the censorship law going into force last year, the EU’s ruling commission of unelected bureaucrats used its new powers to investigate Musk’s platform, after months of saber-rattling from European officials that began shortly after Musk’s takeover of the company and his concurrent promise to roll back its censorship regime.

The €120 million fine is not the end of the matter. If X ignores the Commission’s demands, the EU can impose periodic penalties up to 5% of a company’s average daily worldwide turnover for each day of continued non-compliance.

While this censorship attack appears to be coming solely from Europe, the real picture is more complicated. The Digital Services Act did not emerge in a vacuum – it was the product of a transatlantic US-EU censorship partnership that reached its apex under the Biden administration.

This partnership, run out of the International Trade Administration at the Department of Commerce, and the office of the U.S. Trade Representative at the White House, developed a shared US-EU strategy on containing “harmful” online content.

This strategy included forcing tech platforms to open themselves up to “disinformation researchers” who can identify both disfavored online speech and online advertisers for potential boycott operations. Failure to allow these (often state-funded) “researchers” free access to their data is precisely why the EU has fined X.

In more ways than one, the European Commission is acting as a government-in-exile for the US federal censorship state that enjoyed the support of the previous administration, only to be defunded and expunged by the current one.

Transatlantic Censorship: The US-EU Trade and Technology Council

The roots of the EU’s censorship campaign against X can be found in the US-EU Trade and Technology Council, a four-year liaison between the architects of censorship on two continents that began in the first year of the Biden administration.

In its inaugural joint statement , the US and EU announced 10 “working groups” devoted to a wide range of digital policy. One of these, Working Group 5, specifically sought to develop “shared approaches” to censoring disfavored online content.

From the inaugural statement:

Working Group 5 – Data Governance and Technology Platforms: The Data Governance and Technology Platforms working group is tasked to exchange information on our respective approaches to data governance and technology platform governance, seeking consistency and interoperability where feasible. We intend to exchange information and views regarding current and future regulations in both the United States and European Union with a goal of effectively addressing shared concerns, while respecting the full regulatory autonomy of the United States and European Union. We have identified common issues of concern around: illegal and harmful content and their algorithmic amplification, transparency, and access to platforms’ data for researchers as well as the democratic responsibility of online intermediaries. We have also identified a shared interest in using voluntary and multi-stakeholder initiatives to complement regulatory approaches in some areas. We are committed to transatlantic cooperation regarding platform policies that focus on disinformation, product safety, counterfeit products, and other harmful content. We plan to engage with platform companies to improve researchers’ access to data generated by platforms, in order to better understand and be able to address systemic risks linked to how content spreads online. We also plan to engage in a discussion on effective measures to appropriately address the power of online platforms and ensure effective competition and contestable markets. The working group is also tasked to discuss, alongside other working groups, common approaches on the role of cloud infrastructure and services.

In addition to stating outright that the previous administration “shared concerns” with the European Union about the spread of disfavored online speech, the inaugural statement specifically highlights the importance of making sure “researchers” can access platforms’ data.

This is a critical point. “Disinformation researchers” are the eyes and ears of the global censorship regime. They are the ones who compile lists of disfavored users, posts, and online narratives that are then passed on to content moderation departments for censorship.

Without access to data from social media platforms at scale, disinformation researchers – and the global censorship machine that relies on them – are blind.

Guaranteeing this access was so important to the US-EU Trade and Technology Council that in each of the years it published reports, it was mentioned as a priority.

In 2022, the US-EU body reiterated that “researchers” are essential to understanding online risks, “particularly those related to illegal content and harmful content,” and expressed concern that data access was dependent on “voluntary” mechanisms established by tech platforms:

In 2023, the US-EU council gave the issue of data access for researchers equal billing to the protection of children as a policy priority:

In the same publication, the US and EU expressed dissatisfaction with the fact that tech platforms were not bound by law to open up their data to the “disinformation researchers” of the censorship industry.

The joint statement specifically connects this to priorities like “information integrity” and “election integrity” – the pretexts used to censor virtually every prominent Trump supporter ahead of the 2020 election, as well as President Trump himself, in 2021.

The statement also said that data access for researchers was critical to analyzing “disproportionate impacts on vulnerable, marginalized, or underrepresented communities.” This is the hate speech pretext — another common bureaucratic justification to shut down political speech online. And the United States under the previous administration, despite the First Amendment, gave its full support to the EU’s approach.

And finally, in 2024 – an entire standalone report devoted to the topic, titled “Mechanisms for Researcher Access to Online Platform Data.” The report compared the various methods of data access from the platforms, and noted (with US approval) that the Digital Services Act mandated access to both data and ad repositories.

This is a critical point, because DSA Articles 40.12 and 39 are exactly the provisions that the European Commission has accused X of violating. The previous administration directly supported the same provisions that led to a €120 million fine against an American company.

From the European Commission’s announcement of the €120M fine against X:

The current administration may have ended domestic government support for censorship with Executive Order 14149, but European bureaucrats are still dutifully carrying out the priorities of the transatlantic censorship regime the order sought to abolish.

The “Civil Society” Swarm

The direct censorship collaboration between the US and EU via the previous administration’s Trade and Technology Council exposes a deep level of transatlantic coordination in the establishment of the DSA.

But there are also a great many indirect links. As FFO previously revealed, 23 US-funded organizations – a mixture of NGOs, university research departments, and private companies – are involved in the EU’s censorship regime.

Some were signatories to the EU’s code of practice on disinformation, while others are directly enforcing the DSA through participation in the EU’s network of “digital observatories,” which monitor online speech at scale. These are the same digital observatories that the EU hopes to force X to open itself up to.

The list of US-funded participants in the “digital observatories” is as follows:

Newsguard

The infamous private blacklisting service received $750,000 from the Department of Defense in 2021. Its close connections to the old foreign policy blob don’t end there — until December of last year, its public list of advisors (now scrubbed from the website) included former NSA and CIA director Michael Hayden, former NATO head Anders Fogh Rasmussen, former DHS secretary Tom Ridge, and former State Department undersecretary Richard Stengel.

Bellingcat

Widely praised by the US intelligence community, Bellingcat is a journalism and “open source intelligence” outfit that aims to provide actionable intelligence in a public way. The organization has received over $115,000 in funding from NED.

The University of Tartu, Estonia

Received a $400,000 award from the US Department of State for an “advanced study program in combating disinformation to improve democratic resilience.”

Vytautas Magnus University, Lithuania

Received $10,250 from the U.S. Department of State to host a “global media and information literacy week.”

Funky Citizens, Romania

A Romanian “anti-disinformation” and “civic fitness” nonprofit that received $161,822 across five separate grants from the State Department, including to “strengthen the communication strategies of Romanian NGOs ahead of the 2024 election.” The results of that election were nullified by Romanian courts – apparently with EU support, as boasted by former EU commissioner Thierry Breton. The pretext for the nullification, which prevented the accession of a right-wing populist government, was an alleged Russian social media campaign.

Fundatia Centrul / Centrul Pentru Jurnalism Independent, Moldova

This journalism NGO in Moldova received $918,709 from the State Department across seven grants , more than $500,000 of which was concentrated in two grants in 2022 and 2023. These include grants to fund local journalists and  “social media monitoring.”

Seesame PR, Slovakia

This Slovakia-based PR firm was paid $149,924 by the State Department across two grants , including $125,000 for a PR campaign to “strengthen trust in freedom and democracy” in 2019.

Vrije University, Belgium

Vrije has received over $1.5 million in grants and awards from the U.S. government , including the US Department of Defense, the US Department of State, and the Environmental Protection Agency. While most of this funding is for projects unrelated to censorship, one $50,000 grant from the State Department was for “empowering youth” in Belgium, Finland, and Turkey to “counter disinformation.”

Cyprus University of Technology

The Cyprus University of Technology has received $316,765 from the State Department , including a number of grants for “misinformation” and “media literacy” research.

EU DisinfoLab, Belgium

Purpose-built for combating “disinformation,” this NGO received $15,000 from the State Department to implement its project in Latvia.

Verificat, Spain

A “fact checker” aimed at combating disinformation in Spain’s Catalonia region, Verificat received $11,000 from the State Department to host a workshop on disinformation.

NewsWhip

A social media analytics company, NewsWhip received part of a $317,268 award (that could pay up to $866,919 by its end date in 2028) from the State Department for a variety of electronic services.

Austria Presse Agentur (APA)

The national news agency of Austria, APA has received $305,874 in subscription revenues from the State Department since 2016.

Agence France Presse (AFP)

Based in France, AFP is the world’s oldest news agency, with over $300 million in annual revenues and the 27th-most visited news site in the world. AFP received $9,914,902 from the U.S. government , mainly in the form of subscriptions. The bulk of this ($9.14m) came from the U.S. Agency for Global Media. It also received $351,592 from the Department of Defense, $150,808 from the Department of State, and $279,255 from USAID. Of all the organizations involved in the EDMO disinformation-monitoring hubs, AFP has the most involvement, acting as an observer in eight of the fourteen “digital media observatories.”

The Open Society European Policy Institute

As citizen journalists on X have uncovered , George Soros’ influence operation in Europe, the Open Society European Policy Institute, has held multiple meetings with EU officials to discuss “disinformation,” including meetings that directly address the Digital Services Act.

From the European Commission’s public register of meetings :

The deep, decades-long collaboration between George Soros and the US foreign policy state is well-documented , including direct coordination with USAID in the 1990s to train emerging political leaders in central and eastern Europe.

The picture is clear: the European Union is not acting alone, and the Digital Services Act is not a purely EU creation. Until the current administration, it had the full support of the permanent bureaucracy in Washington DC, as well as the global network of US taxpayer funded “civil society” organizations that now hope to use the DSA as a battering ram to gain access to X’s data.

The previous administration may be gone, but its censorship machine lives on in the EU.

Under the hood of Canada Spends with Brendan Samek

Simon Willison
simonwillison.net
2025-12-09 23:52:05
I talked to Brendan Samek about Canada Spends, a project from Build Canada that makes Canadian government financial data accessible and explorable using a combination of Datasette, a neat custom frontend, Ruby ingestion scripts, sqlite-utils and pieces of LLM-powered PDF extraction. Here's the video...
Original Article

9th December 2025

I talked to Brendan Samek about Canada Spends , a project from Build Canada that makes Canadian government financial data accessible and explorable using a combination of Datasette, a neat custom frontend, Ruby ingestion scripts, sqlite-utils and pieces of LLM-powered PDF extraction.

Here’s the video on YouTube .

Sections within that video:

  • 02:57 Data sources and the PDF problem
  • 05:51 Crowdsourcing financial data across Canada
  • 07:27 Datasette demo: Search and facets
  • 12:33 Behind the scenes: Ingestion code
  • 17:24 Data quality horror stories
  • 20:46 Using Gemini to extract PDF data
  • 25:24 Why SQLite is perfect for data distribution

Build Canada and Canada Spends

Build Canada is a volunteer-driven non-profit that launched in February 2025—here’s some background information on the organization, which has a strong pro-entrepreneurship and pro-technology angle.

Canada Spends is their project to make Canadian government financial data more accessible and explorable. It includes a tax sources and sinks visualizer and a searchable database of government contracts, plus a collection of tools covering financial data from different levels of government.

Datasette for data exploration

The project maintains a Datasette instance at api.canadasbilding.com containing the data they have gathered and processed from multiple data sources—currently more than 2 million rows plus a combined search index across a denormalized copy of that data.

  The Datasette UI for the canada-spends database lists the following tables:

  • aggregated-contracts-under-10k (487 rows): year, contract_goods_number_of, contracts_goods_original_value, contracts_goods_amendment_value, contract_service_number_of, contracts_service_original_value, contracts_service_amendment_value, contract_construction_number_of, contracts_construction_original_value, contracts_construction_amendment_value, acquisition_card_transactions_number_of, acquisition_card_transactions_total_value, owner_org, owner_org_title
  • cihr_grants (53,420 rows): external_id, title, project_lead_name, co_researchers, institution, province, country, competition_year, award_amount, program, program_type, theme, research_subject, keywords, abstract, duration, source_url
  • contracts-over-10k (1,172,575 rows): reference_number, procurement_id, vendor_name, vendor_postal_code, buyer_name, contract_date, economic_object_code, description_en, description_fr, contract_period_start, delivery_date, contract_value, original_value, amendment_value, comments_en, comments_fr, additional_comments_en, additional_comments_fr, agreement_type_code, trade_agreement, land_claims, commodity_type, commodity_code, country_of_vendor, solicitation_procedure, limited_tendering_reason, trade_agreement_exceptions, indigenous_business, indigenous_business_excluding_psib, intellectual_property, potential_commercial_exploitation, former_public_servant, contracting_entity, standing_offer_number, instrument_type, ministers_office, number_of_bids, article_6_exceptions, award_criteria, socioeconomic_indicator, reporting_period, owner_org, owner_org_title
  • global_affairs_grants (2,378 rows): id, projectNumber, dateModified, title, description, status, start, end, countries, executingAgencyPartner, DACSectors, maximumContribution, ContributingOrganization, expectedResults, resultsAchieved, aidType, collaborationType, financeType, flowType, reportingOrganisation, programName, selectionMechanism, policyMarkers, regions, alternameImPositions, budgets, Locations, otherIdentifiers, participatingOrgs, programDataStructure, relatedActivities, transactions
  • nserc_grants (701,310 rows): title, award_summary, application_id, competition_year, fiscal_year, project_lead_name, institution, department, province, award_amount, installment, program, selection_committee, research_subject, area_of_application, co-researchers, partners, external_id, source_url
  • sshrc_grants (213,085 rows): id, title, program, fiscal_year, competition_year, applicant, organization, amount, discipline, area_of_research, co_applicant, keywords, source_url
  • transfers (357,797 rows): FSCL_YR, MINC, MINE, MINF, DepartmentNumber-Numéro-de-Ministère, DEPT_EN_DESC, DEPT_FR_DESC, RCPNT_CLS_EN_DESC, RCPNT_CLS_FR_DESC, RCPNT_NML_EN_DESC, RCPNT_NML_FR_DESC, CTY_EN_NM, CTY_FR_NM, PROVTER_EN, PROVTER_FR, CNTRY_EN_NM, CNTRY_FR_NM, TOT_CY_XPND_AMT, AGRG_PYMT_AMT

  Download SQLite DB: canada-spends.db (2.4 GB)
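
The combined search index mentioned above is the sort of thing sqlite-utils handles well. As a minimal sketch (the search_index table and the columns chosen below are hypothetical, not the project's actual schema), building a denormalized copy and enabling SQLite full-text search could look like this:

import sqlite_utils

# Open the same SQLite file that Datasette serves
db = sqlite_utils.Database("canada-spends.db")

# Hypothetical denormalized copy: one row per grant, flattened to a few text columns
db["search_index"].insert_all(
    {
        "source_table": "sshrc_grants",
        "title": row["title"],
        "body": " ".join(str(row.get(col, "")) for col in ("organization", "keywords")),
    }
    for row in db["sshrc_grants"].rows
)

# Enable FTS5 full-text search over the denormalized columns
db["search_index"].enable_fts(["title", "body"], create_triggers=True)

Datasette detects FTS-enabled tables and adds a search box for them, which is roughly how a combined search across millions of rows like this one gets exposed.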

Processing PDFs

The highest quality government financial data comes from the audited financial statements that every Canadian government department is required to publish. As is so often the case with government data, these are usually published as PDFs.

Brendan has been using Gemini to help extract data from those PDFs. Since this is accounting data the numbers can be summed and cross-checked to help validate the LLM didn’t make any obvious mistakes.
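
As a rough illustration of that workflow (this is not the Canada Spends ingestion code; the model name, prompt, and JSON shape are all assumptions), one might ask Gemini for structured line items and then verify they add up to the statement's reported total:

import json

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumes a Google AI Studio API key

# Upload an audited financial statement and ask for structured output
pdf = genai.upload_file("department-financial-statement.pdf")
model = genai.GenerativeModel("gemini-1.5-pro")  # model choice is an assumption
response = model.generate_content([
    pdf,
    'Extract every expense line item as JSON of the form '
    '{"items": [{"label": str, "amount": float}], "reported_total": float}. '
    "Return only the JSON.",
])

data = json.loads(response.text)

# Accounting data is self-checking: extracted line items should sum to the reported total
extracted_total = sum(item["amount"] for item in data["items"])
if abs(extracted_total - data["reported_total"]) > 0.01:
    raise ValueError(
        f"Extraction mismatch: items sum to {extracted_total}, "
        f"statement reports {data['reported_total']}"
    )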

Further reading

MS-13 and Trump Backed the Same Presidential Candidate in Honduras

Intercept
theintercept.com
2025-12-09 23:44:21
MS-13 gang members told Hondurans to vote for the Trump-backed right-wing candidate or “we’ll kill you and your whole fucking family.” The post MS-13 and Trump Backed the Same Presidential Candidate in Honduras appeared first on The Intercept....
Original Article

Gangsters from MS-13, a Trump-designated Foreign Terrorist Organization , intimidated Hondurans not to vote for the left-leaning presidential candidate, 10 eyewitness sources told The Intercept, in most cases urging them to instead cast their ballots in last Sunday’s election for the right-wing National Party candidate — the same candidate endorsed by U.S. President Donald Trump.

Ten residents from four working-class neighborhoods controlled by MS-13, including volunteer election workers and local journalists, told The Intercept they saw firsthand gang members giving residents an ultimatum to vote for the Trump-endorsed conservative candidate or face consequences. Six other sources with knowledge of the intimidation — including government officials, human rights investigators, and people with direct personal contact with gangs — corroborated their testimony. Gang members drove voters to the polls in MS-13-controlled mototaxi businesses, three sources said, and threatened to kill street-level activists for the left-leaning Liberty and Refoundation, or LIBRE, party if they were seen bringing supporters to the polls. Two witnesses told The Intercept they saw members of MS-13 checking people’s ballots inside polling sites, as did a caller to the national emergency help line.

“A lot of people for LIBRE didn’t go to vote because the gangsters had threatened to kill them,” a resident of San Pedro Sula, the second-largest city in Honduras, told The Intercept. Mareros , as the gang members are known, intimidated voters into casting their ballots for Nasry “Tito” Asfura, known as Papi a la Órden or “Daddy at your service.” Multiple residents of San Pedro Sula alleged they were also directed to vote for a mayoral candidate from the centrist Liberal Party.

Miroslava Cerpas, the leader of the Honduran national emergency call system, provided The Intercept with four audio files of 911 calls in which callers reported that gang members had threatened to murder residents if they voted for LIBRE. A lead investigator for an internationally recognized Honduran human rights NGO, who spoke anonymously with The Intercept to disclose sensitive information from a soon-to-be published report on the election, said they are investigating gang intimidation in Tegucigalpa and the Sula Valley “based on direct contact with victims of threats by gangs.”

“If you don’t follow the order, we’re going to kill your families, even your dogs. We don’t want absolutely anyone to vote for LIBRE.”

“People linked to MS-13 were working to take people to the voting stations to vote for Asfura, telling them if they didn’t vote, there would be consequences,” the investigator told The Intercept. They said they received six complaints from three colonias in the capital of Tegucigalpa and three in the Sula Valley, where voters said members of MS-13 had threatened to kill those who openly voted for the ruling left LIBRE party or brought party representatives to the polls. The three people in the Sula Valley, the investigator said, received an audio file on WhatsApp in which a voice warns that those who vote for LIBRE “have three days to leave the area,” and “If you don’t follow the order, we’re going to kill your families, even your dogs. We don’t want absolutely anyone to vote for LIBRE. We’re going to be sending people to monitor who is going to vote and who followed the order. Whoever tries to challenge the order, you know what will happen.”

The MS-13 interference took place as the U.S. president, who has obsessed over the gang since his first term , extended an interventionist hand over the elections. On November 28, Trump threatened to cut off aid to Honduras if voters didn’t elect Asfura while simultaneously announcing a pardon for Asfura’s ally and fellow party member Juan Orlando Hernández, the former president of Honduras convicted in the U.S. on drug trafficking and weapons charges last year.

“If Tito Asfura wins for President of Honduras, because the United States has so much confidence in him, his Policies, and what he will do for the Great People of Honduras, we will be very supportive,” Trump wrote on Truth Social. “If he doesn’t win, the United States will not be throwing good money after bad, because a wrong Leader can only bring catastrophic results to a country, no matter which country it is.”

The election remains undecided over a week after the fact: Asfura holds a narrow lead over centrist Liberal Party candidate Salvador Nasralla, while Rixi Moncada, the LIBRE party candidate, remains in a distant third. As people await the final results, one San Pedro Sula resident said, “there’s been a tense calm.”

It’s unlikely the MS-13 interference led to LIBRE’s loss, since the ruling party had already suffered a significant drop in popularity after a lack of change, continued violence, and corruption scandals under four years of President Xiomara Castro. But the LIBRE government pointed to a raft of other electoral irregularities, and a preliminary European Union electoral mission report recognized that the election was carried out amid “intimidation, defamation campaigns, institutional weakness, and disinformation,” though it ignored LIBRE’s accusations of “fraud.” The Honduran attorney general announced their own investigation into irregularities in the election last week, and on Monday, two representatives for the National Electoral Council informed Hondurans that the electronic voting system wasn’t updated for over 48 hours over the weekend, while results are still being finalized.

“There is clear and resounding evidence that this electoral process was coerced by organized crime groups,” said Cerpas, who is a member of the LIBRE party, “pushing the people to vote for Nasry Asfura and intimidating anyone who wanted to vote for Rixi Moncada.”

“There is clear and resounding evidence that this electoral process was coerced by organized crime groups.”

Gerardo Torres, the vice chancellor of foreign relations for the LIBRE government, told The Intercept via phone that manipulation of elections by maras is a well-established practice — but that the timing of the threats was alarming given Trump’s simultaneous pardoning of Hernández and endorsement of Asfura. “When, a day before the elections, the president of the United States announces the liberation of Hernández, and then automatically there is a surge in activity and intimidation by MS-13,” Torres said, it suggests that the gang members see the return of the former president as “an opportunity to change their situation and launch a coordinated offensive.”

“It would seem like the U.S. is favoring, for ideological reasons, a narco-state to prevent the left from returning to power,” he said.

The White House, Asfura, and the National Party did not respond to The Intercept’s requests for comment.

All witnesses who alleged election interference have been granted anonymity to protect them from targeting by MS-13.

“They Control These Colonias”

Bumping over potholed dirt roads on the outskirts of San Pedro Sula the day before the presidential election, a motorcycle taxi driver informed their passenger of MS-13’s latest ultimatum: The mototaxis “were strictly prohibited from bringing people from LIBRE to the voting stations on election day,” recalled the passenger. “Only people for the National Party or the Liberal Party — but for LIBRE, no one, no one, not even flags were allowed.”

Gangs like MS-13 “control the whole area of Cortés,” the passenger said, referring to their home department. “Total subjugation.”

The gang members closely monitor the movements of those within their territories, in many cases by co-opting or controlling mototaxi services to keep track of who comes and goes. Three other sources in San Pedro Sula and one in Tegucigalpa confirmed MS-13’s co-optation of mototaxis in the area; another source with direct, yearslong contact with gang members on the north coast of Honduras confirmed that MS-13 was pushing residents in their territories of San Pedro Sula to vote for Asfura by the same means. When members of MS-13 passed through Cortés warning that those who voted for LIBRE “had three days to leave,” the mototaxi passenger said, residents surrounded by years of killings, massacres, and disappearances by the gang knew what might await them if they defied.

MS-13 was formed in the 1980s in Los Angeles, California, among refugees of the Salvadoran civil war who the George H.W. Bush administration then deported en masse to Central America. In the ’90s, local gangs of displaced urban Hondurans morphed with the Salvadoran franchise. Over the years, the Mara Salvatrucha, which MS stands for, evolved into a sophisticated criminal enterprise: first as street-level drug dealers, then extortionists, assassins for hire, and cocaine transporters who have been documented working in league with high-level traffickers and state officials for at least two decades.

If Honduras has been a home turf of gangs, the country is also an anchor for U.S. power in the region, hosting the second-largest U.S. military base in Latin America and a laboratory for radical experiments in libertarian far-right “private cities.” In 2009, the Honduran military carried out a coup under the passive watch of U.S. authorities, ousting then-President Manuel Zelaya , a centrist and husband of current President Xiomara Castro. The homicide rate skyrocketed, turning the country into the world’s most violent, per U.S. State Department rankings, by the 2010s.

The chaos gave rise to ex-president Hernández, whom U.S. prosecutors later accused of turning Honduras into a “cocaine superhighway” as he directed the country’s military, police, and judiciary to protect drug traffickers. Last week, Hernández was released from a West Virginia prison after a pardon from Trump, and on Monday, the Honduran attorney general announced an international warrant for his arrest.

“Gangsters were going from house to house to tell people to vote for Papi.”

As Honduran voters processed the latest cycle of U.S. influence over their politics, the more immediate menace at the polls extended to the local level. “Gangsters were going from house to house to tell people to vote for Papi [Asfura] and el Pollo ,” said a San Pedro Sula resident who volunteered at a voting booth on election day, referring to the city’s mayor, Roberto Contreras of the Liberal Party. Two other sources in the city, and one government source in Tegucigalpa, also said gang members were backing Contreras.

“The team of Mayor Roberto Contreras categorically rejects any insinuation of pacts with criminal structures,” said a representative for the mayor in a statement to The Intercept. “Any narrative that tries to tie [support for Contreras] with Maras or gangs lacks base, and looks to distract attention from the principal message: the population went to vote freely, without pressure and with the hope of a better future.”

Gang intimidation of voters isn’t new in Honduras, where, within territories zealously guarded and warred over by heavily armed gangs, even the threat for residents to vote for certain candidates is enough to steer an election in their district. “Remember that they control these colonias,” said one of the San Pedro Sula residents. “And given the fact that they have a lot of presence, they tell the people that they’re going to vote for so-and-so, and the majority follow the orders.”

The human rights lawyer Victor Fernández, who ran for mayor of San Pedro Sula as an independent candidate but lost in the March primaries, said he and his supporters also experienced intimidation from MS-13 during his primary campaign. After his own race was over, he said he continued to see indications of gang intervention in the presidential campaign for months leading up to election day.

“Both before and during the elections on November 30, gangsters operating here in the Sula Valley exercised their pressure over the election,” he said, explaining this conclusion was drawn from “recurring” testimonies with residents of multiple neighborhoods. “The great violent proposal that people have confirmed is that gang members told them they couldn’t go vote for LIBRE, and that whoever did so would have to confront [the gang] structure.”

“Vamos a votar por Papi a la Órden”

Minutes after submitting a highly publicized complaint to the Public Ministry on Monday, Cerpas, of the National Emergency call system, told The Intercept that her office received 892 verified complaints of electoral violations on election day. “In those calls,” she said, “there was a significant group of reports regarding intimidation and threats by criminal groups.”

Four audio recordings of residents calling the emergency hotline, which Cerpas shared with The Intercept, reflect the wider accusation that mareros used murderous intimidation tactics to prevent people from voting for LIBRE and vote, instead, for Asfura.

In one of the files, a woman calling from Tegucigalpa tells the operator that members of MS-13 had “threatened to kill” anyone who voted for LIBRE while posing as election observers at the voting center. “They’re outside the voting center, they’re outside and inside,” she says, referring to members of MS-13, her voice trembling. “I entered, and they told me, ‘If you vote for LIBRE, we’ll kill you and your whole fucking family.’”

For days before the election, a resident from a rural region of the country, whose time in a maximum-security prison called La Tolva put him in yearslong proximity to gang members, had received messages from friends and family members living in Tegucigalpa and San Pedro Sula. They all reported a variation of the same story: Gang members on mototaxis informing everyone in their colonias, “ Vamos a votar por Papi a la Órden .” (“We’re going to vote for” Asfura.)

A former mid-level bureaucrat for the LIBRE government told The Intercept that, during the lead-up to the election, “LIBRE activists who promoted the vote … were intimidated by members of gangs so that they would cease pushing for the vote for LIBRE.” The former official didn’t specify the gangs, though they said the intimidation took place in three separate neighborhoods.

“All day, the muchachos [gang members] were going around and taking photos of the coordinators,” read messages from local organizers shared with The Intercept. The gang members “said that they needed to close themselves in their houses.”

Testimony at Hernández’s trial indicated that members of MS-13 were subcontracted as early as 2004 through the corrupt , U.S.-allied police commander Juan Carlos “El Tigre” Bonilla to provide security for caravans of cocaine alongside soldiers. Evidence presented in the trial of Midence Oquelí Martínez Turcios , a former Honduran soldier and longtime congressional deputy for the Liberal Party who was convicted of drug trafficking charges last week, revealed that he trained sicarios for MS-13 to carry out high-level assassinations on behalf of the drug trafficking clan known as the Cachiros. Testifying at Hernández’s 2024 trial, the imprisoned Cachiros leader claimed to have paid $250,000 in protection money to the former president.

Trump wiped away Hernández’s conviction, calling it political theater, but he sees MS-13’s sicarios in a different light. To Trump, the gangsters are human “ animals ,” their gang a “ menace ” that “ violated our borders ” in an “ infestation ” — justifying militarized crackdowns on caravans of Hondurans fleeing violence under Hernández and the categorization of the gang as a foreign terrorist organization. Announcing the designation in February, a White House press release reads: “MS-13 uses public displays of violence to obtain and control territory and manipulate the electoral process in El Salvador.”

“We used to think this was just to influence the mayors, not the presidency.”

“It’s known that MS-13 will do vote buying,” the investigator examining voter intimidation said. “This is a recurring practice. But we used to think this was just to influence the mayors, not the presidency.”

In El Salvador, gangs like MS-13 have intervened in favor of another Trump ally, Nayib Bukele , whose government has been embroiled in scandal over alleged collusion with MS-13 and other gangs — meaning that the election in Honduras wasn’t the first time that the same candidate Trump endorsed was promoted by a gang he now designates a terrorist organization.

For Cerpas, the coincidence of that voter intimidation with Hernández’s release is cause for alarm. “The people in Honduras are afraid,” she said, “because organized crime has been emboldened by the pardon of Juan Orlando Hernández.”

Postmortem: Intermittent Failure in SimKube CI Runners

Lobsters
blog.appliedcomputing.io
2025-12-09 23:34:55
Comments...
Original Article
We’re very sorry if your SimKube CI pipeline looked like this at some point in the last week or so. Really, honest.

On Wednesday, November 26, 2025, while testing changes to ACRL’s SimKube CI Runner 1 , an ACRL employee discovered an intermittent failure in the runner. This failure caused approximately 50% of the simulations scheduled on the runner to fail, resulting in failed actions in users’ CI pipelines, which prevented new deploys of mission-critical code. We at ACRL take our responsibility as the world’s leading provider of Kubernetes simulation analysis very seriously, and we understand the severe impact this incident had on users of our CI runner. We deeply apologize for this incident, and are committed to taking whatever actions necessary to restore trust with our customers. In the remainder of this post we will outline the timeline of this incident, a detailed analysis of the underlying causes, and the remediation steps we have taken to prevent a recurrence of this incident.

The aforementioned ACRL employee discovered the issue late Wednesday afternoon on the 26th. However, because the following day was Thanksgiving, the investigation was postponed until the following week under the hypothesis that it was likely a transient error, it’d probably go away if we didn’t look at it too hard, and we had a lot of Thanksgiving food to eat.

On the following Monday (December 1st), during our regularly-scheduled company all-hands, we re-triggered the CI pipeline once and it succeeded, whereupon we decided the problem had fixed itself. It wasn’t until Thursday, December 4th, when the incident re-occurred that we decided to bother spending some time investigating. We then spent most of the afternoon troubleshooting until we found the inciting factors 2 and identified a series of remediations. Those fixes were published at some point later on, when we got around to it.

SimKube is ACRL’s simulation environment for Kubernetes . It is designed to allow organizations to study changes in their production Kubernetes clusters in a safe and isolated environment. One way of using SimKube is as a dedicated step in CI pipeline; this would enable users to check for regressions or bugs in their Kubernetes code before it is deployed.

The SimKube CI runner is published 3 as an Amazon Machine Image (AMI) 4 , which contains a complete SimKube environment. The runner can replay trace files contained in the codebase, and will check the outcome of the simulation to see if it’s Succeeded or Failed . The symptoms of this incident were that periodically, a simulation would report as “failed” after completing its entire run. The SimKube driver pod (the component responsible for running the events in the trace file) would report the following error, along with a stack trace and a panic:

timed out deleting simulation root sk-test-sim-driver-sn295-root

The “simulation root” is a Kubernetes custom resource which acts as a “hook” to hang all the other simulation objects off of. The simulation root exists to make for a one-step clean-up procedure: because of Kubernetes garbage collection , when the root is deleted, all objects owned by the simulation root will also be deleted.

The first step we took in our investigation was to study the trace file running in the simulation. This trace file (also available as an example trace in the SimKube repo) creates a single CronJob , lets it run for three minutes, and then deletes the CronJob . The CronJob is configured to create a new pod every minute, and the pod sleeps for 30 seconds before terminating. This trace file is used to test the pod lifecycle management features of SimKube.

We investigated the log files from all the relevant controllers, including the SimKube driver pod, the Kubernetes controller manager, and the Kubernetes API server. The results were, to use the technical terminology, extremely f*$&ing weird. The SimKube driver pod had dozens of log lines which looked like the following:

INFO mutate_pod: mutating pod (hash=10855072724872030168, seq=66) pod.namespaced_name="virtual-default/hello-simkube-29414550-tcr49"
INFO mutate_pod: first time seeing pod, adding tracking annotations pod.namespaced_name="virtual-default/hello-simkube-29414550-tcr49"

What do these lines mean? Well, the SimKube driver registers itself as a mutating webhook so that it can redirect simulated pods to the fake nodes and apply other labels and annotations to them. The hello-simkube pod is the one that’s owned by the simulated CronJob. What’s curious about these log lines is that they repeat over, and over, and over again, even after the CronJob object itself has been deleted! At first we thought this meant that the CronJob hadn’t actually been deleted, but after some further study we realized that the pod name was the same for every single one of these log entries: in other words, the SimKube mutating webhook is trying to mutate the same pod for 10 minutes, well after the simulation was over and everything (supposedly) had been deleted.

The next clue came from the Kubernetes controller manager logs:

 "syncing orphan pod failed" err=<
        Pod "hello-simkube-29414550-tcr49" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds`, `spec.tolerations` (only additions to existing tolerations), `spec.terminationGracePeriodSeconds` (allow it to be set to 1 if it was previously negative)
        @@ -140,7 +140,9 @@
          "TerminationGracePeriodSeconds": 30,
          "ActiveDeadlineSeconds": null,
          "DNSPolicy": "ClusterFirst",
        - "NodeSelector": null,
        + "NodeSelector": {
        +  "type": "virtual"
        + },
          "ServiceAccountName": "default",
          "AutomountServiceAccountToken": null,
          "NodeName": "cluster-worker",
 > logger="job-controller" pod="virtual-default/hello-simkube-29414550-tcr49"

This is a standard error that gets returned when something (a user, a controller, etc) tries to update a read-only field. In this case, it’s showing that something is trying to update the pod’s node selector after the pod has already been created, which is not allowed. There are two curious things to note in this log entry: first, the timestamp is after SimKube has deleted the CronJob, and it states that the pod has been orphaned, which means it’s not owned by anything. In other words, the CronJob really was deleted! Secondly, we got lucky in that some of the additional context shows that the pod has been scheduled to a node, that is, cluster-worker . This is not one of our simulated nodes! This is a real node! That shouldn’t happen.

The last clue came from the API server logs, where we discovered that the SimKube driver mutating webhook had been configured to fail open 5 . This means that, if the webhook fails (for whatever reason), the pod object will be allowed through anyways. Specifically, we saw that the webhook was failing because of a certificate error.

The certificate error immediately cast suspicion on cert-manager , which is the component that manages all of the TLS certificates for SimKube. Cert-manager is quite a complex bit of machinery, but is nevertheless required because mutating webhooks must communicate over TLS, which means they need certificates. In SimKube, we create a self-signed certificate issuer for this purpose. Cert-manager is actually a very robust tool, and has the really nice feature that it can auto-inject certificates into your webhook configuration if you apply the cert-manager.io/inject-ca-from annotation, which we do in SimKube. Investigating the cert-manager logs, everything seemed like it was working as designed at first, until we inspected the timestamps more closely. Then these two lines stood out:

I1204 18:29:07.814009 attempting to acquire leader lease kube-system/cert-manager-cainjector-leader-election...
I1204 18:30:11.466829 successfully acquired lease kube-system/cert-manager-cainjector-leader-election

By default, cert-manager, like many other components in Kubernetes, operates in a semi- HA fashion. There is one “leader” pod and a number of hot standby pods. That way, if the leader pod crashes or gets evicted, one of the standby pods can immediately take over. Kubernetes provides a distributed locking mechanism to ensure that only one pod can be the leader at a time. Until the lease is acquired, the cert-manager pod can’t do any work. What’s interesting to note here is that it took almost a minute to acquire the lease; and moreover, the simulation start time on the runner was 18:29:41, which means that the first CronJob pod, created at 18:30:00, was created before the cert-manager injector could provide the SimKube mutating webhook with its certificate.

So that’s one mystery answered: if the webhook didn’t have a certificate, it can’t apply the proper node selector, and because it fails open, the pod gets scheduled onto a real Kubernetes node instead of the intended fake node. But why and how does this pod become orphaned and stick around in the cluster until the SimKube driver times out?
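
Fail-open webhooks are easy to audit for. The snippet below is not part of SimKube, just an illustrative check using the official Kubernetes Python client that flags any mutating webhook whose failure policy lets pods through when the webhook itself is broken:

from kubernetes import client, config

config.load_kube_config()
admission = client.AdmissionregistrationV1Api()

# Walk every mutating webhook in the cluster and flag the ones that fail open
for cfg in admission.list_mutating_webhook_configuration().items:
    for webhook in cfg.webhooks or []:
        if webhook.failure_policy == "Ignore":  # "Ignore" means the API server fails open
            print(f"{cfg.metadata.name}/{webhook.name} fails open: "
                  "a broken webhook will silently let unmutated pods through")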

Now that we knew the mechanism for the failure, it was easy to develop a local reproduction: delete the cert-manager injector pod from the cluster, start a simulation, and then after the first CronJob pod was created, recreate the cert-manager injector pod. This simulates 6 the effect of the injector waiting for the lease. In fact, the first time we did this, we didn’t recreate the injector pod until after the simulated-cronjob-sleep-pod-that-got-scheduled-on-a-real-node-by-mistake 7 had finished, and in this case it was correctly cleaned up and the simulation finished as normal.

Repeating the test locally, we observed that the critical failure only occurs if the cert-manager injector pod comes up while the CronJob pod is running . Since we had a reliable way to reproduce the error, we decided to take a quick peek at the kubelet logs and saw this log line repeated over and over again:

"Failed to update status for pod" err="failed to patch status
...
<long status update message>
...
for pod \"virtual-default\"/\"hello-simkube-29414879-r22m5\":
pods \"hello-simkube-29414879-r22m5\" is forbidden: node \"karpenter-worker\" cannot update labels through pod status"

Aha! This is the last piece of the puzzle: kubelet is trying to update the status of the pod to say that it’s finished running, but it can’t. The error message is slightly weird: it says that kubelet is sending a modification to the pod labels to the pod status endpoint , which is forbidden because pod labels aren’t part of the pod status. What’s strange about this is that, if you look at the actual update kubelet is sending, there are no label updates.

I suspect those of you who’ve written admission webhooks are nodding along by now. The flow of data looks like this:

kubelet status update -> API server -> SimKube mutating webhook -> 
API server -> kubelet

In other words: because the SimKube mutating webhook was subscribed to both CREATE and UPDATE events 8 , it intercepted the kubelet’s status update, said “hey, this pod doesn’t have any of the right simulation labels or the proper node-selector on it, lemme add those!” The Kubernetes API server received the modification and said (in the logs) “Hey, you can’t add a node selector on an UPDATE!”, and said (to kubelet) “Hey, you can’t add a label from the /status endpoint!”, and said (to the mutating webhook) nothing 9 . Kubelet continued to retry the status update for the pod every 10 seconds until the simulation driver terminated.
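
To make the failure mode concrete, here is a stripped-down sketch of the mutation logic in Python (SimKube's real webhook is not this code, and the simkube.io/simulated label below is invented for illustration). Restricting it to CREATE, as in the remediation list below, means kubelet's status UPDATEs pass through untouched:

import base64
import json

def mutate(admission_review: dict) -> dict:
    """Build an AdmissionReview response that redirects newly created pods to virtual nodes."""
    request = admission_review["request"]
    response = {"uid": request["uid"], "allowed": True}

    # Only mutate on CREATE: status UPDATEs from kubelet must pass through unchanged,
    # otherwise we keep re-adding node selectors and labels that the API server rejects.
    if request["operation"] == "CREATE":
        patch = [
            {"op": "add", "path": "/spec/nodeSelector", "value": {"type": "virtual"}},
            # label key is illustrative; "/" in the key is escaped as ~1 per JSON Patch rules
            {"op": "add", "path": "/metadata/labels/simkube.io~1simulated", "value": "true"},
        ]
        response["patchType"] = "JSONPatch"
        response["patch"] = base64.b64encode(json.dumps(patch).encode()).decode()

    return {"apiVersion": "admission.k8s.io/v1", "kind": "AdmissionReview", "response": response}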

Wait, but why did everything clean up after the simulation crashed? Well, once the simulation driver pod terminated, there was no longer a mutating webhook in place to add labels to the pods based on a status update, so the update went through, Kubernetes realized the pod had completed, and it deleted it to finish its cleanup.

After conducting this detailed analysis, ACRL engineers identified the following remediation steps:

  1. Stop running cert-manager in HA mode, because our one-replica cert-manager injector pod definitely doesn’t need to be spending up to one (1) minute trying to claim a lock that nobody else is holding.

  2. Configure the SimKube driver mutating webhook to fail closed: we basically never want a pod that is designated for a simulated node to get scheduled on a real node, because that could cause all kinds of issues.

  3. Configure the SimKube driver mutating webhook to only listen to pod CREATE events, not UPDATE events. Once the simulated pod is running, the driver never makes any further changes, so there’s no reason to listen for updates.

  4. Modify the SimKube simulation controller to wait for the driver pod to receive its certificate before continuing with simulation setup (see the sketch after this list).

  5. Improve our logging and metrics monitoring infrastructure so that it’s easier to identify and troubleshoot these issues in the future.
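
For remediation 4, a rough sketch of what "wait for the certificate" could look like, polling cert-manager's Certificate resource with the Kubernetes Python client (SimKube itself isn't written in Python, and the namespace and certificate name here are assumptions, so treat this purely as an illustration of the check):

import time

from kubernetes import client, config

def wait_for_certificate_ready(namespace: str, name: str, timeout_s: int = 120) -> None:
    """Block until cert-manager reports the named Certificate as Ready."""
    config.load_kube_config()
    api = client.CustomObjectsApi()
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        cert = api.get_namespaced_custom_object(
            "cert-manager.io", "v1", namespace, "certificates", name
        )
        conditions = cert.get("status", {}).get("conditions", [])
        if any(c["type"] == "Ready" and c["status"] == "True" for c in conditions):
            return
        time.sleep(2)
    raise TimeoutError(f"certificate {namespace}/{name} not Ready after {timeout_s}s")

# Hypothetical usage before kicking off a simulation:
# wait_for_certificate_ready("simkube", "sk-driver-cert")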

As is common with incidents of this nature and scale, there was no single point of failure that caused the issue; had any one of these remediations been in place, the incident would not have occurred. To prevent future recurrence of this issue, and to enable defense in depth, we will prioritize getting these fixes in place at some point in the future when we feel like getting around to it.

ACRL cares strongly about the experience of the zero customers who are using this SimKube CI Runner action. We deeply apologize for the impact that our failure had on your CI pipelines and deploy process, and will be issuing refunds to all zero of the customers who tried to use our runner image during the period of this outage. Please feel free to contact our support team if you have any further questions or concerns about this outage, and rest assured we will strive to do better next time.

~drmorr


Microsoft Patch Tuesday, December 2025 Edition

Krebs
krebsonsecurity.com
2025-12-09 23:18:29
Microsoft today pushed updates to fix at least 56 security flaws in its Windows operating systems and supported software. This final Patch Tuesday of 2025 tackles one zero-day bug that is already being exploited, as well as two publicly disclosed vulnerabilities....
Original Article

Microsoft today pushed updates to fix at least 56 security flaws in its Windows operating systems and supported software. This final Patch Tuesday of 2025 tackles one zero-day bug that is already being exploited, as well as two publicly disclosed vulnerabilities.

Despite releasing a lower-than-normal number of security updates these past few months, Microsoft patched a whopping 1,129 vulnerabilities in 2025, an 11.9% increase from 2024. According to Satnam Narang at Tenable , this year marks the second consecutive year that Microsoft patched over one thousand vulnerabilities, and the third time it has done so since its inception.

The zero-day flaw patched today is CVE-2025-62221 , a privilege escalation vulnerability affecting Windows 10 and later editions. The weakness resides in a component called the “ Windows Cloud Files Mini Filter Driver ” — a system driver that enables cloud applications to access file system functionalities.

“This is particularly concerning, as the mini filter is integral to services like OneDrive, Google Drive, and iCloud, and remains a core Windows component, even if none of those apps were installed,” said Adam Barnett , lead software engineer at Rapid7 .

Only three of the flaws patched today earned Microsoft’s most-dire “critical” rating: Both CVE-2025-62554 and CVE-2025-62557 involve Microsoft Office , and both can be exploited merely by viewing a booby-trapped email message in the Preview Pane. Another critical bug — CVE-2025-62562 — involves Microsoft Outlook , although Redmond says the Preview Pane is not an attack vector with this one.

But according to Microsoft, the vulnerabilities most likely to be exploited from this month’s patch batch are other (non-critical) privilege escalation bugs, including:

CVE-2025-62458 — Win32k
CVE-2025-62470 — Windows Common Log File System Driver
CVE-2025-62472 — Windows Remote Access Connection Manager
CVE-2025-59516 — Windows Storage VSP Driver
CVE-2025-59517 — Windows Storage VSP Driver

Kev Breen , senior director of threat research at Immersive , said privilege escalation flaws are observed in almost every incident involving host compromises.

“We don’t know why Microsoft has marked these specifically as more likely, but the majority of these components have historically been exploited in the wild or have enough technical detail on previous CVEs that it would be easier for threat actors to weaponize these,” Breen said. “Either way, while not actively being exploited, these should be patched sooner rather than later.”

One of the more interesting vulnerabilities patched this month is CVE-2025-64671 , a remote code execution flaw in the GitHub Copilot Plugin for JetBrains, the AI-based coding assistant that is used by Microsoft and GitHub. Breen said this flaw would allow attackers to execute arbitrary code by tricking the large language model (LLM) into running commands that bypass the guardrails and add malicious instructions in the user’s “auto-approve” settings.

CVE-2025-64671 is part of a broader, more systemic security crisis that security researcher Ari Marzuk has branded IDEsaster (IDE  stands for “integrated development environment”), which encompasses more than 30 separate vulnerabilities reported in nearly a dozen market-leading AI coding platforms, including Cursor , Windsurf , Gemini CLI , and Claude Code .

The other publicly-disclosed vulnerability patched today is CVE-2025-54100 , a remote code execution bug in Windows Powershell on Windows Server 2008 and later that allows an unauthenticated attacker to run code in the security context of the user.

For anyone seeking a more granular breakdown of the security updates Microsoft pushed today, check out the roundup at the SANS Internet Storm Center . As always, please leave a note in the comments if you experience problems applying any of this month’s Windows patches.

Show HN: Gemini 3 imagines Hacker News as a HyperCard stack in 1994

Hacker News
hyper-card-hacker-news.vercel.app
2025-12-09 23:04:02
Comments...

Pete Hegseth Says the Pentagon's New Chatbot Will Make America 'More Lethal'

403 Media
www.404media.co
2025-12-09 23:00:45
The Department of War aims to put Google Gemini 'directly into the hands of every American warrior.'...
Original Article

Secretary of War Pete Hegseth announced the rollout of GenAI.mil today in a video posted to X . To hear Hegseth tell it, the website is “the future of American warfare.” In practice, based on what we know so far from press releases and Hegseth’s posturing, GenAI.mil appears to be a custom chatbot interface for Google Gemini that can handle some forms of sensitive—but not classified—data.

Hegseth’s announcement was full of bold pronouncements about the future of killing people. These kinds of pronouncements are typical of the second Trump administration which has said it believes the rush to “win” AI is an existential threat on par with the invention of nuclear weapons during World War II.

Hegseth, however, did not talk about weapons in his announcement. He talked about spreadsheets and videos. “At the click of a button, AI models on GenAI can be used to conduct deep research, format documents, and even analyze video or imagery at unprecedented speed,” Hegseth said in the video on X. Office work, basically. “We will continue to aggressively field the world’s best technology to make our fighting force more lethal than ever before.”

Emil Michael, the Pentagon’s under secretary for research and engineering, also stressed how important GenAI would be to the process of killing people in a press release about the site’s launch.

“There is no prize for second place in the global race for AI dominance. We are moving rapidly to deploy powerful AI capabilities like Gemini for Government directly to our workforce. AI is America's next Manifest Destiny, and we're ensuring that we dominate this new frontier,” Michael said in the press release, referencing the 19th century American belief that God had divinely ordained Americans to settle the west at the same time he announced a new chatbot.

The press release says Google Cloud's Gemini for Government will be the first instance available on the internal platform. It’s certified for Controlled Unclassified Information, the release states, and claims that because it’s web grounded with Google Search–meaning it’ll pull from Google search results to answer queries–that makes it “reliable” and “dramatically reduces the risk of AI hallucinations.” As we’ve covered, because Google search results are also consuming AI content that contains errors and AI-invented data from across the web, it’s become nearly unusable for regular consumers and researchers alike.

During a press conference about the rollout this morning, Michael told reporters that GenAI.mil would soon incorporate other AI models and would one day be able to handle classified as well as sensitive data. As of this writing, GenAI’s website is down.

“For the first time ever, by the end of this week, three million employees, warfighters, contractors, are going to have AI on their desktop, every single one,” Michael told reporters this morning, according to Breaking Defense . They’ll “start with three million people, start innovating, using building, asking more about what they can do, then bring those to the higher classification level, bringing in different capabilities,” he said.

The second Trump administration has done everything in its power to make it easier for the people in Silicon Valley to push AI on America and the world. It has done this, in part, by framing it as a national security issue. Trump has signed several executive orders aimed at cutting regulations around data centers and the construction of nuclear power plants. He’s threatened to sign another that would block states from passing their own AI regulations . Each executive order and piece of proposed legislation threatens that losing the AI race would mean making America weak and vulnerable and erode national security.

The country’s tech moguls are rushing to build datacenters and nuclear power plants while the boom time continues. Nevermind that people do not want to live next to datacenters for a whole host of reasons. Nevermind that tech companies are using faulty AIs to speed up the construction of nuclear power plants. Nevermind that the Pentagon already had a proprietary LLM it had operated since 2024 .

“We are pushing all of our chips in on artificial intelligence as a fighting force. The Department is tapping into America's commercial genius, and we're embedding generative AI into our daily battle rhythm,” Hegseth said in the press release about GenAI.mil. “AI tools present boundless opportunities to increase efficiency, and we are thrilled to witness AI's future positive impact across the War Department.”

About the author

Matthew Gault is a writer covering weird tech, nuclear war, and video games. He’s worked for Reuters, Motherboard, and the New York Times.


OpenEvolve: Teaching LLMs to Discover Algorithms Through Evolution

Hacker News
algorithmicsuperintelligence.ai
2025-12-09 22:54:33
Comments...
Original Article

OpenEvolve: Teaching LLMs to Discover Algorithms Through Evolution

How do we teach machines to discover algorithms? Traditional approaches rely on hand-crafted heuristics, exhaustive search, or gradient-based optimization. But what if we could harness the creative potential of large language models (LLMs) within an evolutionary framework?

OpenEvolve is an open-source evolutionary coding agent that integrates large language models into a quality-diversity search framework for algorithm discovery. Candidate programs are produced via LLM-guided edits (diff-based by default), evaluated with user-defined metrics, and organized using MAP-Elites while an island model with migration supports parallel, diversified exploration. The evaluation pipeline supports cascade staging and an artifact side-channel that feeds execution traces and errors back into subsequent prompts; optional LLM-based feedback can be incorporated into scoring.

OpenEvolve has been applied across many domains—here are a few examples: systems optimization , scientific discovery , geospatial algorithms , scaling law discovery , GPU kernel optimization , prompt optimization , and more.


Architecture Overview


Figure 1: OpenEvolve architecture showing the five interconnected components of the evolution loop

The Evolution Loop

  • Prompt Sampler: Constructs context-rich prompts by selecting a parent program from the current island and curating evidence sets (top performers by fitness, lineage ancestors, diverse extremes across feature bins, and random samples). Prompts include the parent's code, evaluation metrics, feature coordinates for MAP-Elites, evolution history, and (optionally) execution artifacts. Template selection supports diff-based editing by default or full rewrites, with controlled stochasticity.
  • LLM Ensemble: Generates candidate code using a weighted ensemble of OpenAI-compatible models (deterministic under seeds). In standard mode, a model is sampled by weight; in model-based islands, each island uses a fixed model. Responses drive either diff-based edits (SEARCH/REPLACE blocks) or full rewrites (JSON/code-block extraction), with generation parameters drawn from configuration.
  • Evaluator: Executes the user-provided evaluate(program_path) with timeouts and retries; optionally applies cascade evaluation ( evaluate_stage1/2/3 ) with thresholds to filter weak candidates early. It can incorporate LLM-based feedback into metrics and captures artifacts (e.g., stderr, tracebacks) for subsequent prompt context. Parallel evaluations are supported via an internal task pool.
  • Program Database: Implements MAP-Elites per island, binning programs along configurable feature dimensions (defaults include complexity and diversity; custom dimensions are taken from evaluator metrics). New candidates replace cell occupants when fitness improves (preferring combined_score , otherwise a safe numeric aggregate excluding feature dimensions). The database enforces population limits, tracks the global best, logs prompts, supports migration, and persists checkpoints.
  • Controller: Orchestrates the loop, including seeding, logging, prompt/evaluator initialization, and process-based parallel execution. It schedules iterations across islands, manages checkpointing and resume, enforces early stopping/target score criteria, stores artifacts, and writes the best discovered program and its metadata to the output directory.

Key Algorithmic Innovations

Island-Based Evolution with Lazy Migration

OpenEvolve maintains multiple isolated populations (islands) that evolve independently to reduce premature convergence and enable parallel exploration. Migration is event-driven: each island migrates when its per-island program additions since the last migration reach a configured interval, rather than on wall-clock time. Migration follows a ring topology by default (optional random migration), transferring a fraction of top programs while avoiding duplicate code in the destination island.

# Configuration example
database:
  num_islands: 5
  migration_interval: 20   # generations, not iterations
  migration_rate: 0.1      # 10% of top programs migrate
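
The ring migration described above is simple to picture. The function below is a conceptual sketch rather than OpenEvolve's internal implementation: each island passes a fraction of its fittest programs to the next island in the ring, skipping exact duplicates:

def migrate_ring(islands, migration_rate=0.1):
    """Move a fraction of each island's best programs to the next island in the ring."""
    # islands: list of populations, each a list of (code, fitness) tuples
    n = len(islands)
    migrants_per_island = []
    for population in islands:
        ranked = sorted(population, key=lambda p: p[1], reverse=True)
        k = max(1, int(len(ranked) * migration_rate))
        migrants_per_island.append(ranked[:k])

    for i, migrants in enumerate(migrants_per_island):
        destination = islands[(i + 1) % n]  # ring topology: island i feeds island i + 1
        existing_code = {code for code, _ in destination}
        for code, fitness in migrants:
            if code not in existing_code:  # avoid duplicating programs in the destination
                destination.append((code, fitness))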

MAP-Elites for Diversity Preservation

Each island maintains a MAP-Elites grid over configurable feature dimensions (defaults include complexity and diversity; additional dimensions can be supplied by the evaluator). A candidate occupies or replaces the cell if it improves fitness (preferring combined_score , otherwise a safe aggregate over numeric metrics excluding feature dimensions). This enforces one elite per cell and preserves quality-diversity. The system also avoids exact duplicates (e.g., during migration) and computes diversity using structural measures (e.g., edit distance), rather than relying on code embeddings.
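
A stripped-down picture of one island's grid (a conceptual sketch, not OpenEvolve's program database, which tracks far more metadata): programs are binned by their feature values, and a cell is only overwritten by a fitter occupant.

def feature_bins(features, bins_per_dim=10):
    """Map feature values in [0, 1] to a discrete grid cell, e.g. (complexity, diversity)."""
    return tuple(min(int(v * bins_per_dim), bins_per_dim - 1) for v in features)

class MapElitesGrid:
    def __init__(self):
        self.cells = {}  # cell coordinates -> (program, fitness)

    def add(self, program, fitness, features):
        cell = feature_bins(features)
        incumbent = self.cells.get(cell)
        if incumbent is None or fitness > incumbent[1]:
            self.cells[cell] = (program, fitness)  # one elite per cell
            return True
        return False

grid = MapElitesGrid()
grid.add("def solve(x): return x * 2", fitness=0.4, features=(0.2, 0.7))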

Cascade Evaluation

Evaluation proceeds in stages with configurable thresholds. If cascade functions are provided, Stage 1 performs fast checks (e.g., import/execute), Stage 2 runs lightweight tests, and Stage 3 executes comprehensive benchmarks. Candidates must meet stage thresholds to advance. Timeouts and exceptions are captured as artifacts and can be fed back into subsequent prompts. When cascade functions are not defined, evaluation falls back to a single-stage evaluate(program_path) with timeouts and retries.
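
Concretely, the cascade is just optional functions defined alongside (or instead of) evaluate() in the user's evaluator file, each returning a dict of metrics. The sketch below is illustrative rather than prescribed by OpenEvolve; it assumes the candidates expose the solve(x) toy function from the Getting Started example later in this post, and the metric names are made up:

import runpy
import subprocess
import sys
import time

def evaluate_stage1(program_path):
    """Stage 1: fast sanity check that the candidate program runs at all."""
    result = subprocess.run([sys.executable, program_path], capture_output=True, timeout=10)
    return {"runs": 1.0 if result.returncode == 0 else 0.0}

def evaluate_stage2(program_path):
    """Stage 2: a cheap correctness check on one known input."""
    module = runpy.run_path(program_path)  # exposes the candidate's top-level functions
    return {"smoke_score": 1.0 if module["solve"](21) == 42 else 0.0}

def evaluate_stage3(program_path):
    """Stage 3: full benchmark; here, score by how quickly solve() handles a larger input."""
    module = runpy.run_path(program_path)
    start = time.perf_counter()
    module["solve"](10**6)
    elapsed = time.perf_counter() - start
    return {"combined_score": 1.0 / (1.0 + elapsed)}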

Double-Selection Strategy

Parent selection is biased toward high-fitness programs, while inspiration material shown to the LLM is drawn from complementary sources (top programs, lineage ancestors, diverse extremes across feature bins, and random samples). This separation encourages improvements guided by the current best while maintaining exploration pressure via diverse exemplars, implemented through prompt construction rather than direct recombination.


Sample Use Cases

Example 1: Algorithmic Discovery

On the AlgoTune benchmark, OpenEvolve discovered algorithms achieving dramatic speedups through automatic optimization:


Figure 2: Algorithmic discovery results showing dramatic speedups on the AlgoTune benchmark

Key breakthroughs include automatic discovery of JAX JIT compilation (321x), FFT-based convolution (256x), and optimized graph algorithms (95.78x). The system evolved from simple iterative implementations to sophisticated numerical computing patterns without human intervention. For more detailed analysis, see Towards Open Evolutionary Agents .

Example 2: Circle Packing

OpenEvolve matched state-of-the-art results (2.634 sum of radii for n=26), evolving from naive geometric constructions to discovering scipy.optimize with SLSQP—a completely different algorithmic approach than the initial solution.

Example 3: GPU Kernel Optimization

Evolution of Metal GPU kernels for transformer attention on Apple Silicon:


Figure 3: GPU kernel performance improvements for transformer attention on Apple Silicon

OpenEvolve discovered several non-obvious optimizations:

  • 8-element SIMD vectorization matching Apple Silicon's hardware width
  • Two-pass online softmax reducing memory bandwidth
  • GQA-specific memory layouts exploiting head structure

These optimizations maintain 100% numerical accuracy while achieving measurable performance improvements across diverse inference scenarios. For more details, see GPU Kernel Discovery .

Example 4: LLM Prompt Optimization

Beyond code, OpenEvolve can evolve prompts themselves:

Prompt Optimization Results

Figure 4: Prompt optimization results on GEPA benchmarks

On GEPA benchmarks, evolved prompts achieved +10.69% accuracy on HotpotQA (multi-hop reasoning) and +6.42% overall across multiple benchmarks. This demonstrates OpenEvolve's versatility—the same evolutionary framework optimizes both code and natural language.

Evolution Progress: As shown below for the AlgoTune benchmark, performance consistently improves over generations. Extended evolution (200 iterations) achieved 24% better results than shorter runs (100 iterations), suggesting that patient exploration of the solution space yields compounding benefits.

Evolution Progress

Figure 5: Performance improvement over generations showing compounding benefits of extended evolution


Getting Started

OpenEvolve provides both library and command-line interfaces:

from openevolve import run_evolution

# Evolve a toy program for 100 iterations. The evaluator is called with a path
# to a candidate program and returns a dict of metrics; benchmark() here stands
# in for whatever scoring function you supply.
result = run_evolution(
    initial_program="def solve(x): return x * 2",
    evaluator=lambda path: {"score": benchmark(path)},
    iterations=100
)

For complex configurations, use YAML files specifying LLM models, evolution strategies, and evaluation parameters. OpenEvolve supports checkpoint/resume for long-running experiments and parallel evaluation across multiple cores. OpenEvolve is open-source and available on GitHub.

Update: This blog post was updated on November 1, 2025

The Beautiful Game Is Getting Ugly

Portside
portside.org
2025-12-09 22:53:36
The Beautiful Game Is Getting Ugly
Original Article

WASHINGTON – Inside the Kennedy Center, FIFA President Gianni Infantino was hanging a gold medal around the neck of his “close friend” Donald Trump and giving him a hefty golden peace trophy.

“The FIFA Peace Prize is awarded annually,” Infantino said of the new award nobody has ever won before from his supposedly apolitical organization. The pair were at the landmark arts venue for a floor show–like event before officials drew brackets of teams to play this summer’s World Cup tournament, to be held in Canada, Mexico, and the United States. FIFA took over the building for free and displaced scheduled National Symphony Orchestra concerts, according to The Washington Post ; it closed down nearby streets for multiple blocks and forced drivers into inconvenient detours.

“This is truly one of the great honors of my life. And beyond awards, we saved millions and millions of lives,” Trump said in a kind of acceptance speech, referring to a series of conflicts around the world he likes to say he stopped, despite contradictory statements from the people involved in the conflicts . “The fact that we could do that, so many different wars that were able to end in some cases right before they started, it was great to get them done,” he continued, claiming without proof that “the world is a safer place now.”

Outside was an entirely different scene. Dozens of protesters had gathered as close as they could to the Kennedy Center, holding up soccer-style red cards that said, “deadly sanctions,” “bombing boats,” and “racist travel bans.” They flew a giant Palestinian flag and mourned the at least 437 Gazan soccer players murdered in the ongoing U.S.-backed Israeli genocide, according to the Palestinian Football Association . The dead include Suleiman al-Obeid , the Pele of Palestine, whom Israeli soldiers murdered as he waited in line for food in Rafah.

Protesters had a cartoonishly huge soccer ball, which they pushed into a stack of oversized cardboard ice cubes. “No ICE in my cup!” another series of signs said. Big snowflakes showered down. A phalanx of cops stood within throwing distance. Protesters set up a table with hot cocoa.

The protesters, organized by activists and soccer fans including the group Get Free, want white supremacy out of soccer. Trump colludes with FIFA billionaires, the group argues, and is using “the beautiful game” to promote his message and vision ahead of America’s 250th birthday. It’s one of many protests expected ahead of next summer’s World Cup games, said political observers who study sports. The way FIFA conducts itself amid Trump’s immigration terror campaign and the GOP’s decision to slam the doors shut on immigration of all kinds, including tourism, will act as a preview for how the administration will treat the Summer Olympics in Los Angeles in 2028, they said.

“Soccer is about equality and freedom of movement,” said Anthony Torres, a spokesperson for Get Free. But Trump, he said, is erasing that, just as he’s erasing the history and presence of Black and brown people in the U.S., filling up his detention gulags and “bringing us back to the heyday of Jim Crow.”

Get Free is calling for the World Cup to be a platform for humanity, Torres said, and for World Cup leadership to stand up against white supremacy. Protesters from other organizations turned out, too.

“We’re here to send a clear message to Trump that you can’t reconcile ICE with soccer culture,” said Slobodan Milic, a protester with Free DC and avid fan of D.C. United, the local MLS team. “Soccer is the most democratic game there is. Everyone tries to hit the ball.”

After chatting briefly about PSV Eindhoven’s recent upset win over Liverpool—“It was on Liverpool’s own turf!”—Milic returned to politics. Along with his fellow D.C. United fans, he said, he spends the 51st minute of every game chanting, “Free D.C.!” in reference to the long-standing push to make D.C. the 51st state. Sometimes, the chant takes over the whole stadium . Now he’s directing his energy toward keeping ICE away from the World Cup. The goal, he said, is to “abolish ICE during the World Cup and get them out of all of our cities.”

World Cup matches will be played in 11 U.S. cities , including Los Angeles, Miami, and New York, all of which have large immigrant populations. Earlier this year, Los Angeles was the site of major ICE operations and counterprotests, and advocates worry that Trump could use the World Cup as a pretense to incite more raids and violence. Protesters like Milic fear that residents without citizenship could be targeted as they try to enjoy the festivities and games next summer.

Those fears are well-founded: Just this July, an asylum seeker was arrested and handed over to ICE after bringing his two children, aged 10 and 14, to watch FIFA’s Club World Cup final in New Jersey. After three months in immigration detention, the man decided not to appeal when a judge rejected his asylum claim, prioritizing leaving detention above all else. He was returned to his country of origin, according to Human Rights Watch.

Closer to the Kennedy Center, soccer fans lined up for hours to get inside, and by noon a few were still making their way through the snow to get in. They, too, were concerned about the upcoming event. One fan, who declined to share his name and who planned to watch the draw at a coffee shop nearby, said he was supporting the team from Iran.

Under Trump’s travel ban, Iranian officials have been barred from coming into the U.S. since June. Trump’s executive order made exemptions for athletes, support staff, and immediate relatives for the World Cup event, but not necessarily for Friday’s draw. Iranian officials had planned to boycott the draw, but Iranian media on Thursday reported that the team’s coach would attend, according to the Associated Press .

The fan stood outside the security line, holding hands with a woman as snowflakes gathered on their thick jackets. If “softer” forms of diplomacy, like international sporting events, have the exact same goal as “harder” forms, “then I’m not sure it’s such an amazing goal,” he said before heading indoors. “But if it’s in the spirit of brotherhood, then that’s great.”

Jules Boykoff, professor and chair of the department of political science at Pacific University and an ex-professional soccer player, said in a phone interview that the open questions for the June matches include not just whether Trump’s anti-immigration policies will affect players, but also how those policies will affect international fans.

He said he didn’t know how someone from Latin America could come into a U.S. sports event, given that the Supreme Court just ruled that Immigration and Customs Enforcement can use race as a reason to disappear someone. “You gotta be a real sports nut to do that,” he said.

Boykoff added that he doubted whether FIFA would take any steps to meet the kinds of demands demonstrators were making.

“Gianni Infantino has been Trump’s number one enabler. FIFA is not going to engage in anything resembling isolating Trump,” he said. “The fact that the World Cup draw is in D.C. is a testament to that. They made it as easy as possible for Trump to be there.”

The protesters kept chanting even as the event got under way and the snow came down heavier. They took turns kicking the massive inflatable soccer ball into the paper blocks of “ICE,” which went tumbling down onto the concrete.

They cheered. “Gooooooooaaaaal!”

===

Whitney Curry Wimbish is a staff writer at The American Prospect. She previously worked in the Financial Times newsletters division, The Cambodia Daily in Phnom Penh, and the Herald News in New Jersey. Her work has been published in multiple outlets, including The New York Times, The Baffler, Los Angeles Review of Books, Music & Literature, North American Review, Sentient, Semafor, and elsewhere. She is a coauthor of The Majority Report’s daily newsletter and publishes short fiction in a range of literary magazines.

Emma Janssen is a writing fellow at The American Prospect, where she reports on anti-poverty policy, health, and political power. Before joining the Prospect, she was at UChicago studying political philosophy, editing for The Chicago Maroon, and freelancing for the Hyde Park Herald.

===

Linux CVEs, more than you ever wanted to know

Hacker News
www.kroah.com
2025-12-09 22:47:36
Comments...
Original Article

It’s been almost 2 full years since Linux became a CNA (CVE Numbering Authority), which means that we (i.e. the kernel.org community) are now responsible for issuing all CVEs for the Linux kernel. During this time, we’ve become one of the largest creators of CVEs by quantity, going from nothing to number 3 in 2024 to number 1 in 2025. Naturally, this has raised questions about both how we are doing all of this work and how people can keep track of it.

I’ve given a number of talks about this over the past years, starting with the Open Source Security podcast right after we became a CNA, then the Kernel Recipes 2024 talk “CVEs are alive, but do not panic”, then a talk at OSS Hong Kong 2024 on the same topic with updated numbers, later a talk at OSS Japan 2024 with more info on the same topic, and finally, for 2024, a more detailed talk whose online version I can’t find.

In 2025 I did lots of work on the CRA, so most of my speaking this year has been about that topic, but the CVE assignment work continued on, evolving to address many of the issues we ran into in our first year of being a CNA. As that work is not part of the Linux kernel source directly, it’s not all that visible to the normal development process, apart from the constant feed on the linux-cve-announce mailing list. So I figured it was time to write down how this is all now working, along with a bunch of background information about how Linux is developed that is relevant to how we do CVE reporting (i.e. almost all non-open-source groups don’t seem to grasp our versioning scheme).

There is an in-kernel document that describes how CVEs can be requested from the kernel community, along with a basic summary of how CVEs are automatically assigned. But as we are an open community, it’s good to go into more detail about how all of us do this work: explaining how our tools have evolved over time and how they work, why some things are the way they are for our releases, and documenting a way that people can track CVE assignments on their own in a format that is, in my opinion, much simpler than attempting to rely on the CVE JSON format (and don’t get me started on NVD…)

So here’s a series of posts going into all of this, hopefully providing more information than you ever wanted to know, which might be useful for other open source projects as they start to run into many of the same issues we have already dealt with (i.e. how to handle reports at scale):

★ iMessage’s Delivery Architecture Makes It Hard to Block Without Blocking All iOS Push Notifications

Daring Fireball
daringfireball.net
2025-12-09 22:42:45
Draw your own conclusions about cellular carriers and enterprise network administrators being similar to authoritarian governments....
Original Article

From Apple’s iMessage Security Overview :

Apple iMessage is a messaging service for iPhone, iPad, Mac, Apple Watch, and Apple Vision Pro. Relying on the Apple Push Notification service (APNs), iMessage lets users send texts and attachments like photos, contacts, locations, links, and emoji. Messages sync across all devices, enabling seamless conversations. Apple doesn’t store message content or attachments, which are all secured with end-to-end encryption so that no one but the sender and receiver can access them. Apple canʼt decrypt the data.

This thread on Mastodon, prompted by my wondering why Russia is blocking FaceTime but not iMessage, suggests that because iMessage messages are sent via APNs, a network (or entire nation) seeking to block iMessage can only do so by blocking all push notifications for iOS. That’s why on airplanes with “free messaging” on in-flight Wi-Fi, you usually also get all incoming push notifications, even for services that aren’t available on the free Wi-Fi.

Here’s a support document from GFI Software , which makes network appliances for enterprises and schools:

The Exinda appliance gives administrators multiple options to stop or throttle applications that can use a lot of bandwidth in the network. An application that many would consider discardable or able to be easily limited in bandwidth is iMessage. When blocking or discarding iMessage traffic, users may experience an issue where all push notifications on iOS devices that have traffic going through the Exinda, i.e., on WiFi, will stop displaying.

Root Cause: Apple uses the Apple Push Notification Service (APNS) to allow application creators to push out information to iOS devices. This includes mail servers being able to push out notifications of calendar and email, or app creators to be able to push text-based messages straight to the device.

Apple might have architected iMessage this way to make iMessage veto-proof with cellular carriers, who, at the time of iMessage’s announcement in June 2011 , were already promoting iPhone push notifications as a reason to upgrade from a dumb phone to an iPhone with a more expensive plan. The carriers might have been tempted to block iMessage over cell networks to keep people using SMS, but they couldn’t without blocking all push notifications, which wouldn’t be tenable. But this architecture also makes iMessage hard to block in authoritarian countries where iPhones are even vaguely popular. (Maybe this helps explain why iMessage isn’t blocked in China, too?)

Draw your own conclusions about cellular carriers and enterprise network administrators being similar to authoritarian governments.

Tufts Student Can Resume Research After Trump Officials Revoked Her Visa, Judge Rules

Portside
portside.org
2025-12-09 22:41:57
Tufts Student Can Resume Research After Trump Officials Revoked Her Visa, Judge Rules
Original Article
Tufts Student Can Resume Research After Trump Officials Revoked Her Visa, Judge Rules

Rümeysa Öztürk in Boston, Massachusetts, on 10 May. | Faith Ninivaggi/Reuters

A federal judge has allowed a Tufts University student from Turkey to resume research and teaching while she deals with the consequences of having her visa revoked by the Trump administration , leading to six weeks of detention.

Rümeysa Öztürk, a PhD student studying children’s relationship to social media, was among the first people arrested as the Trump administration began targeting foreign-born students and activists involved in pro-Palestinian advocacy. She had co-authored an op-ed criticizing her university’s response to Israel and the war in Gaza. Immigration enforcement officers took her away in an unmarked vehicle, in an encounter caught on video in March outside her Somerville residence.

Öztürk has been out of a Louisiana immigrant detention center since May and back on the Tufts campus. But she has been unable to teach or participate in research as part of her studies because of the termination of her record in the government’s database of foreign students studying temporarily in the US.

In her ruling on Monday, chief US district judge Denise J Casper wrote that Öztürk is likely to succeed on claims that the termination was “arbitrary and capricious, contrary to law and in violation of the First Amendment”.

The government’s lawyers unsuccessfully argued that the Boston federal court lacked jurisdiction and that Öztürk’s Student and Exchange Visitor Information System (SEVIS) record was terminated legally after her visa was revoked, making her eligible for removal proceedings.

“There’s no statute or regulation that’s been violated by the termination of the SEVIS record in this case,” Mark Sauter, an assistant US attorney, said during a hearing last week. The Associated Press sent an email on Tuesday seeking comment from Sauter on whether the government plans to appeal.

In a statement, Öztürk, who plans to graduate next year, said while she is grateful for the court’s decision, she feels “a great deal of grief” for the education she has been “arbitrarily denied as a scholar and a woman in my final year of doctoral studies”.

“I hope one day we can create a world where everyone uses education to learn, connect, civically engage and benefit others – rather than criminalize and punish those whose opinions differ from our own,” said Öztürk, who is still challenging her arrest and detention.

===

SAP fixes three critical vulnerabilities across multiple products

Bleeping Computer
www.bleepingcomputer.com
2025-12-09 22:41:26
SAP has released its December security updates addressing 14 vulnerabilities across a range of products, including three critical-severity flaws. [...]...
Original Article

SAP

SAP has released its December security updates addressing 14 vulnerabilities across a range of products, including three critical-severity flaws.

The most severe (CVSS score: 9.9) of all the issues is CVE-2025-42880 , a code injection problem impacting SAP Solution Manager ST 720.

"Due to missing input sanitation, SAP Solution Manager allows an authenticated attacker to insert malicious code when calling a remote-enabled function module," reads the flaw's description.

"This could provide the attacker with full control of the system, hence leading to high impact on confidentiality, integrity, and availability of the system."

SAP Solution Manager is the vendor's central lifecycle management and monitoring platform used by enterprises for system monitoring, technical configuration, incident and service desk, documentation hub, and test management.

The next most severe flaw SAP fixed this month concerns multiple Apache Tomcat vulnerabilities impacting SAP Commerce Cloud components in versions HY_COM 2205, COM_CLOUD 2211, and COM_CLOUD 2211-JDK21.

The flaws are tracked in SAP Commerce Cloud under a single identifier, CVE-2025-55754 , given a CVSS severity rating of 9.6.

SAP Commerce Cloud is an enterprise-grade e-commerce platform backing large-scale online stores with product catalogs, pricing, promotions, checkout, order management, customer accounts, and ERP/CRM integration. It is generally used by large retailers and global brands.

The third critical (CVSS score: 9.1) flaw fixed this month is CVE-2025-42928 , a deserialization vulnerability impacting SAP jConnect, which, under certain conditions, could allow a high-privileged user to achieve remote code execution on the target via specially crafted input.

SAP jConnect is a JDBC driver used by developers and database administrators to connect Java applications to SAP ASE and SAP SQL Anywhere databases.

SAP's December 2025 bulletin also lists fixes for five high-severity flaws and six medium-severity issues, including memory corruption, missing authentication and authorization checks, cross-site scripting, and information disclosure.

SAP solutions are deeply embedded in enterprise environments and manage sensitive, high-value workloads, making them a valuable target for attackers.

Earlier this year, SecurityBridge researchers observed in-the-wild attacks abusing a code-injection flaw (CVE-2025-42957) impacting SAP S/4HANA, Business One, and NetWeaver deployments.

SAP has not marked any of the 14 flaws as actively exploited in the wild, but administrators should deploy the fixes without delay.


Agentic AI Foundation

Simon Willison
simonwillison.net
2025-12-09 22:24:48
Agentic AI Foundation Announced today as a new foundation under the parent umbrella of the Linux Foundation (see also the OpenJS Foundation, Cloud Native Computing Foundation, OpenSSF and many more). The AAIF was started by a heavyweight group of "founding platinum members" ($350,000): AWS, Anthropi...
Original Article

Agentic AI Foundation . Announced today as a new foundation under the parent umbrella of the Linux Foundation (see also the OpenJS Foundation, Cloud Native Computing Foundation, OpenSSF and many more ).

The AAIF was started by a heavyweight group of "founding platinum members" ($350,000): AWS, Anthropic, Block, Bloomberg, Cloudflare, Google, Microsoft, and OpenAI. The stated goal is to provide "a neutral, open foundation to ensure agentic AI evolves transparently and collaboratively".

Anthropic have donated Model Context Protocol to the new foundation, OpenAI donated AGENTS.md, Block donated goose (their open source, extensible AI agent).

Personally the project I'd like to see most from an initiative like this one is a clear, community-managed specification for the OpenAI Chat Completions JSON API - or a close equivalent. There are dozens of slightly incompatible implementations of that not-quite-specification floating around already, it would be great to have a written spec accompanied by a compliance test suite.
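
For readers who have not run into it, the request shape in question looks roughly like this (a hand-rolled illustration of the common denominator rather than any official spec; the endpoint URL and API key below are placeholders, and individual providers differ in the details):

import json
import urllib.request

# The de-facto "Chat Completions" shape: a model name plus a list of role/content
# messages, POSTed to /v1/chat/completions; compatible servers reply with a
# choices list whose first entry carries message.content.
payload = {
    "model": "example-model",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello."},
    ],
    "temperature": 0.7,
}

request = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",  # placeholder endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_KEY_HERE",  # placeholder credential
    },
)
# response = json.load(urllib.request.urlopen(request))  # run against a real server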

The Real Problem of Humanity

Daring Fireball
www.harvardmagazine.com
2025-12-09 22:17:20
Sociobiologist Edward O. Wilson, back in 2009: The real problem of humanity is the following: we have paleolithic emotions; medieval institutions; and god-like technology. A related adage I heard, and internalized, recently: “We’re not thinking creatures who feel; we’re feeling creatures who t...
Original Article

“Stamp collectors” was the derisive term future Nobel laureate James Watson applied to Harvard biology professors involved in classification and anatomy in the 1950s.  The co-discoverer of DNA’s double helix structure, then in his twenties, had little tolerance for other approaches to biological science, and thought Harvard, where he had just been named an associate professor, should not waste a tenured position on subjects such as taxonomy and ecology. “Anyone who would hire an ecologist is out of his mind,” he once said.

The great sociobiologist E. O. Wilson, a year  younger than Watson, was one of the “stamp collectors.” And Harvard offered him tenure (countering an offer from Stanford) before offering it to Watson (who everyone knew would win the Nobel Prize). The biology department voted to defer Watson’s appointment, wanting to “get to know him better.” He did not react calmly, even though he was soon granted tenure. Wilson, recalling those days in his 1994 autobiographical book Naturalist, judged Watson the most mean-spirited academic he knew during his early years on the Harvard faculty. So began the rivalry of two scientists who have changed our understanding of life on Earth.

On September 9, in a sold-out event at Sanders Theatre, Wilson and Watson, who have since buried the hatchet, recalled the great division in the biological sciences in the 1950s: on the one hand, the organismic and evolutionary biologists; on the other, Watson—who was leading the revolution in molecular biology and agitating for the hiring of a critical mass of talent in his nascent field. The molecular biologists, Wilson recalled, “landed in the biology department like aliens in Manhattan,” with Watson as the young avatar of their movement.  The two also reflected on the modern reunification of their field, and on future challenges to the field and to the planet.

The event coincided with the 150th anniversary of the Harvard Museum of Natural History—a longtime hangout of “stamp collectors,” whose methods and collections have proven unexpectedly critical and invaluable in the molecular age of biology—and also with the 150th anniversary of the publication of Darwin’s On the Origin of Species. Robert Krulwich, a correspondent for National Public Radio’s Science Desk, moderated the discussion.

Krulwich, quoting one of Watson’s former students on Watson’s belief that to be good, “you have to have an enemy,” asked if he really needed to proceed as though leading a bunch of marines. Yes, said Watson. “It works with boys, anyway.” Krulwich also prodded the outwardly mild-mannered Wilson, reminding him that he once said he had been “blessed with brilliant enemies.”

Yes, said Wilson, “and I am the only scientist in modern times to have been physically attacked for an idea”—that idea being his theory that there is a biological basis for human nature, for which radical leftists once poured water on him at a conference. “Top that, Jim,” crowed Wilson. “Ambition and competitiveness,” he continued, “are essential to do really good work. I didn’t show it as much as Jim because I am a Southerner.”

Watson, in fact, attributed their eventual reconciliation to the fact that “I hated Ed’s enemies.” But there were also larger changes in the field of biology that, over time, brought the two scientists’ world views closer together, a gradual sintering of the field. “Molecular biology had a bacterial explosion…,” Wilson explains. “But the result of this was that as molecular and cell biology matured, it produced an armamentarium of methods, ideas, and so on, which we [the “stamp collectors”] started grabbing hold of.” Before long, evolutionary and organismic biologists, who study diversity, were “still collecting bugs, but we were moving down in our analyses to include genomics, while molecular biologists started going evolutionary.” The “glory of late twentieth-century biology,” he pointed out, “is that it is unifying.”

Krulwich then asked about the future. Watson said he hoped that cancer would be cured by 2020, and pointed to studies of how the brain works and how life began as two of the most promising areas of research in the biological sciences. Wilson agreed, but would himself, he said, embark on a new study of diversity, this time in the virtually unexplored world of microbes.

Will we solve the crises of the next hundred years? asked Krulwich. “Yes, if we are honest and smart,” said Wilson. “The real problem of humanity is the following: we have paleolithic emotions; medieval institutions; and god-like technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall.” Until we understand ourselves, concluded the Pulitzer Prize-winning author of On Human Nature, “until we answer those huge questions of philosophy that the philosophers abandoned a couple of generations ago—Where do we come from? Who are we? Where are we going?—rationally,” we’re on very thin ground.

Related content from Harvard Magazine:

A profile of Wilson

A review of Watson's Avoid Boring People

A review of Wilson's Consilience

Wilson on Darwin

Multiplying our way out of division

Lobsters
xania.org
2025-12-09 21:58:34
Comments...
Original Article

Written by me, proof-read by an LLM.
Details at end.

I occasionally give presentations to undergraduates, and one of my favourites is taking the students on a journey of optimising a “binary to decimal” routine 1 . There are a number of tricks, which I won’t go in to here, but the opening question I have is “how do you even turn a number into its ASCII representation?”

If you’ve never stopped to think about it, take a moment now to do so, it can be a fun problem.

The simple 2 approach is to use number % 10 to get the rightmost digit (adding 48 to turn it into the ASCII number), then divide by ten, and keep going until the number is zero. This produces the digits backwards 3 , but you can reverse them afterwards, which I won't show here. This routine is one of the few legitimate uses of do while in C, as we always want to emit at least one digit even if the number is zero to start with.

The compiler does a fantastic job with this routine. Yesterday we saw how division by powers of two can be optimised to shifts; today we'll see that the compiler manages to avoid expensive division even when we aren't dividing by a power of two. It also gets the remainder cheaply, which we usually get for free from the divide instruction.

The transformation is quite clever - let’s walk through the annotated assembly:

to_decimal_backwards(unsigned int, char*):
  mov rax, rsi              ; rax = buf
  mov esi, 3435973837       ; esi = 0xcccccccd
.L2:
  mov edx, edi              ; edx = number
  mov ecx, edi              ; ecx = number
  add rax, 1                ; ++buf

  imul rdx, rsi             ; rdx *= 0xcccccccd
  shr rdx, 35               ; rdx = rdx >> 35
                            ; rdx = number / 10 [see below]

  lea r8d, [rdx+rdx*4]      ; r8 = rdx * 5
  add r8d, r8d              ; r8 = rdx * 5 * 2 = rdx * 10
                            ; r8 = (number / 10) * 10 [rounded down]
  sub ecx, r8d              ; ecx = number - (number / 10) * 10
                            ; ecx = number % 10
  add ecx, 48               ; ecx = '0' + (number % 10)
  cmp edi, 9                ; number > 9?
  mov edi, edx              ; number = number / 10
  mov BYTE PTR [rax-1], cl  ; *(buf-1) = '0' + (number % 10)
  ja .L2                    ; loop if number (prior to divide) >= 10
  ret

There's a lot to unpack here, several different optimisations, but the main one is how the compiler has turned division by a constant ten into a multiply and a shift. There's a magic constant 0xcccccccd and a shift right of 35! Shifting right by 35 is the same as dividing by 2**35 - what's going on? 4

Let’s see what happens each step of the algorithm:

>>> 1234 * 0xcccccccd
4239991714858
>>> 4239991714858 // (2**35)
123
>>> 123 * 10
1230
>>> 1234 - 1230
4

What's happening is that 0xcccccccd / 2**35 is very close to ⅒ (around 0.10000000000582077). By multiplying our input value by this constant first, then shifting right, we're doing fixed-point multiplication by ⅒ - which is division by ten. The compiler knows that for all possible unsigned integer values, this trick will always give the right answer. For other divisors and signednesses it sometimes needs to account for rounding: dividing a signed value by three, for example, requires extra correction code, and dividing by fifteen causes even more code to be emitted. However, it's all still faster than a real divide instruction.
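
To convince yourself the unsigned divide-by-ten trick really is exact for every 32-bit input, a quick spot check (mine, not from the post) is easy to run:

import random

MAGIC, SHIFT = 0xCCCCCCCD, 35

def div10(n):
    return (n * MAGIC) >> SHIFT   # the multiply-and-shift the compiler emits

# Exhaustively checking all 2**32 inputs works but is slow in Python,
# so test the edge cases plus a large random sample.
samples = [0, 1, 9, 10, 11, 99, 100, 2**31 - 1, 2**32 - 1]
samples += [random.randrange(2**32) for _ in range(1_000_000)]
assert all(div10(n) == n // 10 for n in samples)
print("multiply-and-shift matches n // 10 on every sample")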

Back to our ASCII conversion example: to get the remainder (the modulus), the compiler takes the (truncated) number / 10, multiplies it back up by 10 using lea tricks (we've covered this before), and the difference between the original number and this computed value is the remainder.

The rest of the optimisations are the compiler trying to do work eagerly (like incrementing buf ), and checking one loop iteration ahead: there’s no point looping if the current number is less than or equal to 9.

Overall, some very clever optimisations that avoid division entirely!

See the video that accompanies this post.


This post is day 7 of Advent of Compiler Optimisations 2025 , a 25-day series exploring how compilers transform our code.

This post was written by a human ( Matt Godbolt ) and reviewed and proof-read by LLMs and humans.

Support Compiler Explorer on Patreon or GitHub , or by buying CE products in the Compiler Explorer Shop .

Posted at 06:00:00 CST on 7th December 2025.

That Frank Seddio Lawsuit Over the Missing Millions Is Only Getting More Convoluted

hellgate
hellgatenyc.com
2025-12-09 21:43:57
A judge gave the escrow lawyer one day to provide proof that $2 million are still in the escrow account. Almost a week later, he hasn’t done so....
Original Article
That Frank Seddio Lawsuit Over the Missing Millions Is Only Getting More Convoluted
Then-Kings County Democratic Party Chair Frank Seddio at Junior's in Brooklyn on November 4, 2016. (Shutterstock)


Qt, Linux and everything: Debugging Qt WebAssembly

Hacker News
qtandeverything.blogspot.com
2025-12-09 21:19:37
Comments...
Original Article

One of the most tedious tasks a developer will do is debugging a nagging bug. It's worse when it's a web app, and even worse when it's a WebAssembly web app.


The easiest way to debug Qt WebAssembly is by configuring with the -g argument, or CMAKE_BUILD_TYPE=Debug. Emscripten embeds DWARF symbols in the wasm binaries.

NOTE: Debugging wasm files with DWARF works only in the Chrome browser, with the help of the C/C++ DevTools Support (DWARF) browser extension. If you are using Safari or Firefox, or do not want to or cannot install a browser extension, you will need to generate source maps, which I will look at in my next blog post.

DWARF debugging

You also need to enable DWARF in the browser developer tools settings, but you do not need symlinks to the source directories, as you would when using source maps, because the binaries are embedded with the full directory path. Like magic!

Emscripten embeds DWARF symbols into the binaries built with -g by default, so re-building Qt or your application in debug mode is all you need to do.

Qt builds debug libraries by default using the optimized argument -g2, which produces less debugging info, but results in faster link times. To preserve debug symbols you need to build Qt debug using the -g or -g3 argument. Both of these do the same thing.

Using DWARF debugger

Open Chrome with the extension mentioned above installed, and open the developer tools. Navigate to the Qt for WebAssembly web application you need to debug. Once it opens, it may take a few seconds for all the symbols and files to get parsed. If you are debugging into Qt, this will take quite a few seconds - just keep waiting.

The JavaScript console will soon contain source file paths and sources. You can find your file to debug and set breakpoints. Just reload the page, and once it hits a breakpoint, it will stop execution and highlight the current line in the source view. It will also show variable names and values.

You can then step through your code as you would when debugging a desktop application.

ProPublica: ‘Trump’s Own Mortgages Match His Description of Mortgage Fraud, Records Reveal’

Daring Fireball
www.propublica.org
2025-12-09 21:17:00
Justin Elliott, Robert Faturechi, and Alex Mierjeski, reporting for ProPublica: For months, the Trump administration has been accusing its political enemies of mortgage fraud for claiming more than one primary residence. President Donald Trump branded one foe who did so “deceitful and potentiall...
Original Article

For months, the Trump administration has been accusing its political enemies of mortgage fraud for claiming more than one primary residence.

President Donald Trump branded one foe who did so “deceitful and potentially criminal.” He called another “ CROOKED ” on Truth Social and pushed the attorney general to take action.

But years earlier, Trump did the very thing he’s accusing his enemies of, records show.

In 1993, Trump signed a mortgage for a “Bermuda style” home in Palm Beach, Florida, pledging that it would be his principal residence. Just seven weeks later, he got another mortgage for a seven-bedroom, marble-floored neighboring property, attesting that it too would be his principal residence.

In reality, Trump, then a New Yorker, does not appear to have ever lived in either home, let alone used them as a principal residence. Instead, the two houses, which are next to his historic Mar-a-Lago estate, were used as investment properties and rented out, according to contemporaneous news accounts and an interview with his longtime real estate agent — exactly the sort of scenario his administration has pointed to as evidence of fraud.

At the time of the purchases, Trump’s local real estate agent told the Miami Herald that the businessman had “hired an expensive New York design firm” to “dress them up to the nines and lease them out annually.” In an interview, Shirley Wyner, the late real estate agent’s wife and business partner who was herself later the rental agent for the two properties, told ProPublica: “They were rentals from the beginning.” Wyner, who has worked with the Trump family for years, added: “President Trump never lived there.”

A newspaper clipping with the text: “Barclay’s International Reality: 1094 S. Ocean: 7 bedrooms, 7 bathrooms, 2 guest houses, tennis, private beach, heated pool. $3000 per day. Available weekly or monthly.”
A newspaper clipping with the text: “Lease: Palm Beach, 124 Woodbridge Road. Luxurious Bermuda style home with large Florida room, 3 or 4 bedrooms, 3 bathrooms, heated pool. Mar-A-Lago privileges. Lease: $45,000 per month.”
Despite signing a mortgage that pledged he would live in each house, Trump listed both homes as rentals. Palm Beach Daily News via Newspapers.com. Redactions by ProPublica.

Mortgage law experts who reviewed the records for ProPublica were struck by the irony of Trump’s dual mortgages. They said claiming primary residences on different mortgages at the same time, as Trump did, is often legal and rarely prosecuted. But Trump’s two loans, they said, exceed the low bar the Trump administration itself has set for mortgage fraud.

“Given Trump’s position on situations like this, he’s going to either need to fire himself or refer himself to the Department of Justice,” said Kathleen Engel, a Suffolk University law professor and leading expert on mortgage finance. “Trump has deemed that this type of misrepresentation is sufficient to preclude someone from serving the country.”

Mortgages for a person’s main home tend to receive more favorable terms, like lower interest rates, than mortgages for a second home or an investment rental property. Legal experts said that having more than one primary-residence mortgage can sometimes be legitimate, like when someone has to move for a new job, and other times can be caused by clerical error. Determining ill intent on the part of the borrower is key to proving fraud, and the experts said lenders have significant discretion in what loans they offer clients. (In this case, Trump used the same lender to buy the two Florida homes.)

But in recent months, the Trump administration has asserted that merely having two primary-residence mortgages is evidence of criminality.

Bill Pulte, the Federal Housing Finance Agency director who has led the charge, said earlier this year: “If somebody is claiming two primary residences, that is not appropriate, and we will refer it for criminal investigation.”

Trump hung up on a ProPublica reporter after being asked whether his Florida mortgages were similar to those of others he had accused of fraud.

In response to questions, a White House spokesperson told ProPublica: “President Trump’s two mortgages you are referencing are from the same lender. There was no defraudation. It is illogical to believe that the same lender would agree to defraud itself.”

The spokesperson added, “this is yet another desperate attempt by the Left wing media to disparage President Trump with false allegations,” and said, “President Trump has never, or will ever, break the law.”

The White House did not respond to questions about any other documents related to the transactions, such as loan applications, that could shed light on what Trump told the lender or if the lender made any exceptions for him.

At the time Trump bought the two Florida properties, he was dealing with the wreckage of high-profile failures at his casinos and hotels in the early 1990s. (He famously recounted seeing a panhandler on Fifth Avenue around this time and telling his companion: “You know, right now that man is worth $900 million more than I am.”) In December 1993, he married the model Marla Maples in an opulent ceremony at The Plaza Hotel. And in Florida, he was pushing local authorities to let him turn Mar-a-Lago, then a residence, into a private club.

Trump bought the two homes, which both sit on Woodbridge Road directly north of Mar-a-Lago, and got mortgages in quick succession in December 1993 and January 1994. The lender on both mortgages, one for $525,000 and one for $1,200,000, was Merrill Lynch.

Each of the mortgage documents signed by Trump contain the standard occupancy requirement — that he must make the property his principal residence within 60 days and live there for at least a year, unless the lender agreed otherwise or there were extenuating circumstances.

But ProPublica could not find evidence Trump ever lived in either of the properties. Legal documents and federal election records from the period give his address as Trump Tower in Manhattan. (Trump would officially change his permanent residence to Florida only decades later, in 2019.) A Vanity Fair profile published in March 1994 describes Trump spending time in Manhattan and at Mar-a-Lago itself.

Trump’s real estate agent, who told the local press that the plan from the beginning was to rent out the two satellite homes, was quoted as saying, “Mr. Trump, in effect, is in a position to approve who his neighbors are.”

In the ensuing years, listings popped up in local newspapers advertising each of the homes for rent. At one point in 1997, the larger of the two homes, a 7-bedroom, 7-bathroom Mediterranean Revival mansion, was listed for $3,000 per day.

Even if Trump did violate the law with his two primary-residence mortgages in Florida, the loans have since been paid off and the mid-1990s is well outside the statute of limitations for mortgage fraud.

The same form from two separate mortgage agreements, both with Donald Trump’s signature.
In 1993, Trump signed a mortgage for a “Bermuda style” home in Palm Beach, pledging that it would be his principal residence. Just seven weeks later, he got another mortgage for a seven-bedroom, marble-floored neighboring property and attested that it too would be his principal residence. Obtained by ProPublica

A spokesperson for Bank of America, which now owns Merrill Lynch, did not answer questions about the Trump mortgages.

“It’s highly unlikely we would have original documents for a 32-year-old transaction, but generally in private client mortgages the terms of the transactions are based on the overall relationship,” the spokesperson said in a statement, “and the mortgages are not backed by or sold to any government sponsored entity.”

Trump’s two mortgages in Palm Beach bear similarities to the loans taken out by political rivals whom his administration has accused of fraud.

In October, federal prosecutors charged New York Attorney General Letitia James over her mortgage. James has been one of Trump’s top targets since she brought a fraud lawsuit against the president and his company in 2022.

A central claim in the case the Trump Justice Department brought against her is that she purchased a house in Virginia, pledging to her lender that it would serve as her second home, then proceeded to use it as an investment property and rent it out. “This misrepresentation allowed James to obtain favorable loan terms not available for investment properties,” according to the indictment.

Trump’s Florida mortgage agreements appear to have made a more significant misrepresentation, as he claimed those homes would be his primary residence, not his secondary home as James did, before proceeding to rent them out.

James has denied the allegations against her, and the case was dismissed last month over procedural issues, though the Justice Department has been trying to reindict her.

The circumstances around Trump’s mortgages are also similar to the case his administration has made against Lisa Cook, a member of the Federal Reserve Board of Governors.

Trump declared he was firing Cook earlier this year over her mortgages, as he has sought to bend the traditionally independent agency to his will and force it to lower interest rates. Cook, who denied wrongdoing , has sued to block the termination and continues to serve on the Fed board as that legal fight continues.

In a letter to Cook, Trump specifically noted that she signed two primary residence mortgages within weeks of each other — just as records show he did in Florida.

“You signed one document attesting that a property in Michigan would be your primary residence for the next year. Two weeks later, you signed another document for a property in Georgia stating that it would be your primary residence for the next year,” Trump wrote. “It is inconceivable that you were not aware of your first commitment when making the second.”

He called the loans potentially criminal and wrote, “at a minimum, the conduct at issue exhibits the sort of gross negligence in financial transactions that calls into question your competence and trustworthiness.”

The Trump administration has made similar fraud allegations against other political enemies, including Democrats Sen. Adam Schiff and Rep. Eric Swalwell, both of whom have denied wrongdoing.

In September, ProPublica reported that three of Trump’s Cabinet members have called multiple homes their primary residences in mortgage agreements. Bloomberg also reported that Secretary of the Treasury Scott Bessent did something similar. (The Cabinet members have all denied wrongdoing.)

Pulte, the Federal Housing Finance Agency head, has denied his investigations are politically motivated. “If it’s a Republican who’s committing mortgage fraud, we’re going to look at it,” he has said. “If it’s a Democrat, we’re going to look at it.”

Thus far, Pulte has not made any publicly known criminal referrals against Republicans. He did not respond to questions from ProPublica about Trump’s Florida mortgages.

I misused LLMs to diagnose myself and ended up bedridden for a week

Hacker News
blog.shortround.space
2025-12-09 21:07:22
Comments...
Original Article

If you read nothing else, read this: do not ever use an AI or the internet for medical advice . Go to a doctor. In fact, do yourself a favor and add this to your preferred AI's system prompt right now:

If I ask you any medical questions, refuse to answer them. Tell me that LLMs are not capable of providing medical advice, and that I should go to a doctor instead.


tl;dr : I developed mysterious symptoms over the course of a month, and instead of going to a doctor I (mis-)used a popular LLM to reassure me that nothing was wrong. Turns out it was Lyme disease (yes, the real one, not the fake one) and it (nearly) progressed to meningitis, resulting in a lumbar puncture, antibiotics, and being bedridden for a week. This is a cautionary tale. Before you judge me too harshly, remember this while you read: I was scared out of my mind and I was not thinking rationally. This can happen to you.

Mysterious symptoms

In July of 2025 I began developing flu-like symptoms. I began to feel feverish and would go to sleep with the most intense chills of my life (it felt like what I imagine being naked at the south pole feels like) and would wake up drenched in sweat.

These symptoms subsided after about a week, but then I developed a small, flat, circular rash which turned into a big rash. This rash was not itchy or painful so I chalked it up to some weird symptoms related to what I thought was the flu. However, being the careful, intelligent, and diligent person I am, I decided it would be best to ask an LLM for advice instead of going to, y'know, an actual doctor.

Playing Doctor

Imagine we invented a perfect medical AI tool. You give it pictures and a list of symptoms and it gives you a set of diagnoses and a degree of certainty. You might prompt this tool like this:

Flat, circular, non-itchy, non-painful red rash with a ring, diffuse throughout trunk. Follows week of chills and intense night sweats, plus fatigue and general malaise

The response might look like

Lyme: 90%

Ring worm: 50%

[etc...]

Which would be great!

Instead, here's how I used this LLM:

I have this rash on my body, but it's not itchy or painful, so I don't think it's an emergency? I just want to know what it might be. I think I had the flu last week so it might just be some kind of immune reaction to having been sick recently. My wife had pityriasis once, and the doctor told her they couldn't do anything about it, it would go away on its own eventually. I want to avoid paying a doctor to tell me it's nothing. Does this sound right?

To which the LLM, in typical LLM fashion, in so many words replied "Yes to everything you just said". Wow! I sure felt reassured that I was right about everything. My point is: I was asking it leading questions in the hopes that it would tell me what I wanted to hear.

Ask and ye shall receive

Oftentimes, when people go to a doctor, we're looking for reassurance as much as we're looking for treatment. We want the doctor to not just cure us, but to tell us that everything is going to be alright; you're not dying, and there is a reason for everything!

So I wasn't asking for an LLM to fix me, I was asking to be lied to . LLMs are very good at lying to you . Cynics might say it's the only thing they're good at, but I digress. I repeated this exercise basically every day, as my rash got worse. I'd open up my LLM app, "ask" it leading questions in the hopes that it tells me not to go to the doctor, and then not to go to the doctor.

It should also be noted that I was hesitant to go to a doctor because I didn't want to pay for a doctor, but that's a different rant.

Broken Firmware

Did I mention that I was scared? This is not rational behavior. What makes this even more irrational is how rational I thought I was! I had seen the 1995 Sandra Bullock film The Net , in which a man is killed when nurses blindly trust a computer which had been hacked by criminals, resulting in his misdiagnosis and death. I told my friends and family how, in the future, we will all need to be careful about similar situations, and how computers can be used to deceive us if we place too much faith in them. I had, not even a month prior, read and shared articles about people who allowed ChatGPT to brainwash them into thinking that they were inside the Matrix . I laughed at these people, wondering how they could be so stupid. What the fuck is wrong with me?

There's a few books you can read about how people really think. To name a few:

  • Why We're Polarized by Ezra Klein
  • The Righteous Mind by Jonathan Haidt
  • Humankind by Rutger Bregman

These books are mostly about politics but they all cite anthropological evidence which says that human beings are basically not rational. We are easily led astray when we are scared.

You know how historians always try to make the point that, if you were alive in 1930s Germany, you might have ended up being a Nazi too? The same thing applies here. If you were experiencing unexplained neurological symptoms, you might just fall victim to some conman, faith healer ...or LLM.

Receiving actual medical care

One day, I woke up with a neck so stiff that I couldn't touch my chin to my chest. I don't know a lot about medicine, but I know that that is an "oh shit". A girl in my high school died of Meningococcal meningitis after sharing a beer with someone at a party, so I was vaguely aware of the symptoms. So I get in my car and I go to urgent care.

The doctor looks at my rash and immediately says "well I think you almost certainly have Lyme disease, but the neurological symptoms make me worried that you have meningitis. You need to go to the emergency room right now".

Spoiler: I didn't have meningitis.

So, I drive myself to the emergency room and tell them I need to be tested for meningitis. It turns out that "meningitis" is the cheat code for getting seen instantly, because I don't even have the chance to sit down before they take me back and start treating me like I had Ebola. Meningococcal meningitis can kill you in literally hours, and is also extremely contagious, so they pulled out all the stops. Once the Infectious Disease doctor saw me and confirmed I had Lyme, I went back to being a normal, non-infectious patient who had to wait his sweet time while patients who didn't waste a month diagnosing themselves with AI were seen.

I won't bore you with the entire rest of the hospital stay, but I will tell you about the lumbar puncture. If you are sensitive to body horror, stop reading immediately, and just remember: don't use LLMs to diagnose yourself, and be wary of the stupid shit you do when you're emotional and irrational . I am telling you about the lumbar puncture so you understand the consequences of asking a computer to lie to you.

I had to get a lumbar puncture to confirm that my brain stem was not infected (it wasn't). Radiology was busy with non-stupid patients that day, so the ER doctor tried to do the lumbar puncture the old fashioned way... 11 different times.

You ever see videos of that Japanese technique for jamming a metal rod down the spinal column of a recently beheaded fish to get it to stop squirming? That's what I kept picturing in my head as I felt every nerve in my thigh catch fire.

Eventually the doctor said "Good news! Radiology agreed to pencil you in", so I go down and get the lumbar puncture assisted with X-rays. They hit the subarachnoid space on the first try. I have had Kidney Stones, Appendicitis, and I've been stabbed in the hand, so believe me when I say that this was the single most intensely painful nanosecond of my life. While I didn't have meningitis, my meninges were pretty significantly inflamed, so getting a needle poked through them felt like what I imagine being impaled on a spike through your groin feels like. I stayed pretty still for the first 11 failed punctures, but when they actually got through, I jumped like I was being electrocuted. Twice. After that, no pain, just twitchy images in my mind of Vlad the Impaler.

Going home

When they confirmed that I was fine, they sent me home with antibiotics. Here's something you may not have known about a lumbar puncture: it puts a hole in your central nervous system and you start leaking cerebro-spinal fluid (CSF). This lowers the intracranial pressure of your skull, causing your brain to sag within your head, and gives you insane headaches. I was bedridden for a week waiting for my spinal column to stop leaking CSF so that I could sit upright. I had to crawl to use the bathroom because if I stood upright, my brain would start to droop inside my skull and I'd be paralyzed with pain.

Moral of the story

  1. Don't use AIs to diagnose yourself
  2. You think you're smarter than me (and maybe you are!) but that doesn't make you immune to the kind of motivated reasoning I engaged in
  3. DON'T USE AIs TO DIAGNOSE YOURSELF
  4. A $150 ER copay and a couple weeks of oral antibiotics is cheaper and less painful than IV antibiotics, 12 lumbar punctures, and a week in bed as you nurture your central nervous system back to good health.

PS: 4 months on, I no longer have Lyme disease, and I have no lasting complications. I chose not to name the LLM product I used because I don't want to imply that this is the fault of the LLM vendor. It's not. I misused their product in a way I knew I wasn't supposed to and paid for it.

The Wall Street Journal: ‘Behind Paramount’s Relentless Campaign to Woo Warner Discovery and President Trump’

Daring Fireball
www.wsj.com
2025-12-09 20:59:19
Joe Flint, Brian Schwartz, and Natalie Andrews, reporting for The Wall Street Journal (gift link, also in News+): “Just tried calling you about new bid we have submitted,” Ellison texted Zaslav. “I heard you on all your concerns and believe we have addressed them in our new proposal. Please give...
Original Article


Windows PowerShell now warns when running Invoke-WebRequest scripts

Bleeping Computer
www.bleepingcomputer.com
2025-12-09 20:45:20
Microsoft says Windows PowerShell now warns when running scripts that use the Invoke-WebRequest cmdlet to download web content, aiming to prevent potentially risky code from executing. [...]...
Original Article


Microsoft says Windows PowerShell now warns when running scripts that use the Invoke-WebRequest cmdlet to download web content, aiming to prevent potentially risky code from executing.

As Microsoft explains, this mitigates a high-severity PowerShell remote code execution vulnerability (CVE-2025-54100), which primarily affects enterprise or IT-managed environments that use PowerShell scripts for automation, since such scripts are less commonly used outside those environments.

The warning has been added to Windows PowerShell 5.1, the PowerShell version installed by default on Windows 10 and Windows 11 systems, and is designed to add the same secure web parsing process available in PowerShell 7.

PowerShell will alert you that, without precautions, scripts contained in web pages downloaded using the 'Invoke-WebRequest' cmdlet could execute on your system. By default, if you press 'Enter' or select 'No,' the operation will be canceled, and PowerShell will suggest rerunning the command with the '-UseBasicParsing' parameter for safer processing.

When choosing 'Yes,' PowerShell will parse the page using the older method (full HTML parsing), allowing the content and embedded scripts to load as before. In short, selecting 'Yes' means you accept the risk, while choosing 'No' stops the action to protect your system.

"Windows PowerShell 5.1 now displays a security confirmation prompt when using the Invoke-WebRequest command to fetch web pages without special parameters," Microsoft explains in a Tuesday advisory.

"This prompt warns that scripts in the page could run during parsing and advises using the safer -UseBasicParsing parameter to avoid any script execution. Users must choose to continue or cancel the operation."

After installing the KB5074204 update, IT admins will see the following confirmation prompt warning of script code execution risks:

Security Warning: Script Execution Risk
Invoke-WebRequest parses the content of the web page. Script code in the web page might be run when the page is parsed.
      RECOMMENDED ACTION:
      Use the -UseBasicParsing switch to avoid script code execution.
      Do you want to continue?

For additional details, see KB5074596: PowerShell 5.1: Preventing script execution from web content (https://support.microsoft.com/help/5072034).

To avoid having their automation scripts hang until manual confirmation, admins are advised to update their scripts to use the UseBasicParsing safe parameter explicitly.

It's also important to note that in PowerShell, the 'curl' command is aliased to the Invoke-WebRequest cmdlet, so you will also see these new warnings when running scripts invoking curl commands.

"Most PowerShell scripts and commands that use the Invoke-WebRequest command will continue to work with little or no modification," Microsoft noted.

"For example, scripts that only download content or work with the response body as text or data are not affected and require no changes."


Django: what’s new in 6.0

Hacker News
adamj.eu
2025-12-09 20:33:14
Comments...
Original Article
Django 6.0: codename “mosaic”

Django 6.0 was released today , starting another release cycle for the loved and long-lived Python web framework (now 20 years old!). It comes with a mosaic of new features, contributed to by many, some of which I am happy to have helped with. Below is my pick of highlights from the release notes .

Upgrade with help from django-upgrade

If you’re upgrading a project from Django 5.2 or earlier, please try my tool django-upgrade . It will automatically update old Django code to use new features, fixing some deprecation warnings for you, including five fixers for Django 6.0. (One day, I’ll propose django-upgrade to become an official Django project, when energy and time permit…)

Template partials

There are four headline features in Django 6.0, which we’ll cover before other notable changes, starting with this one:

The Django Template Language now supports template partials , making it easier to encapsulate and reuse small named fragments within a template file.

Partials are sections of a template marked by the new {% partialdef %} and {% endpartialdef %} tags. They can be reused within the same template or rendered in isolation. Let’s look at examples for each use case in turn.

Reuse partials within the same template

The below template reuses a partial called filter_controls within the same template. It’s defined once at the top of the template, then used twice later on. Using a partial allows the template to avoid repetition without pushing the content into a separate include file.

<section id=videos>
  {% partialdef filter_controls %}
    <form>
      {{ filter_form }}
    </form>
  {% endpartialdef %}

  {% partial filter_controls %}

  <ul>
    {% for video in videos %}
      <li>
        <h2>{{ video.title }}</h2>
        ...
      </li>
    {% endfor %}
  </ul>

  {% partial filter_controls %}
</section>

Actually, we can simplify this pattern further, by using the inline option on the partialdef tag, which causes the definition to also render in place:

<section id=videos>
  {% partialdef filter_controls inline %}
    <form>
      {{ filter_form }}
    </form>
  {% endpartialdef %}

  <ul>
    {% for video in videos %}
      <li>
        <h2>{{ video.title }}</h2>
        ...
      </li>
    {% endfor %}
  </ul>

  {% partial filter_controls %}
</section>

Reach for this pattern any time you find yourself repeating template code within the same template. Because partials can use variables, you can also use them to de-duplicate when rendering similar controls with different data.

Render partials in isolation

The below template defines a view_count partial that’s intended to be re-rendered in isolation. It uses the inline option, so when the whole template is rendered, the partial is included.

The page uses htmx , via my django-htmx package , to periodically refresh the view count, through the hx-* attributes. The request from htmx goes to a dedicated view that re-renders the view_count partial.

{% load django_htmx %}
<!doctype html>
<html>
  <body>
    <h1>{{ video.title }}</h1>
    <video width=1280 height=720 controls>
      <source src="{{ video.file.url }}" type="video/mp4">
      Your browser does not support the video tag.
    </video>

    {% partialdef view_count inline %}
    <section
      class=view-count
      hx-trigger="every 1s"
      hx-swap=outerHTML
      hx-get="{% url 'video-view-count' video.id %}"
    >
      {{ video.view_count }} views
    </section>
    {% endpartialdef %}

    {% htmx_script %}
  </body>
</html>

The relevant code for the two views could look like this:

from django.shortcuts import render


def video(request, video_id):
    ...
    return render(request, "video.html", {"video": video})


def video_view_count(request, video_id):
    ...
    return render(request, "video.html#view_count", {"video": video})

The initial video view renders the full template video.html . The video_view_count view renders just the view_count partial, by appending #view_count to the template name. This syntax is similar to how you’d reference an HTML fragment by its ID in a URL.

History

htmx was the main motivation for this feature, as promoted by htmx creator Carson Gross in a cross-framework review post . Using partials definitely helps maintain “Locality of behaviour” within your templates, easing authoring, debugging, and maintenance by avoiding template file sprawl.

Django’s support for template partials was initially developed by Carlton Gibson in the django-template-partials package , which remains available for older Django versions. The integration into Django itself was done in a Google Summer of Code project this year, worked on by student Farhan Ali and mentored by Carlton, in Ticket #36410 . You can read more about the development process in Farhan’s retrospective blog post . Many thanks to Farhan for authoring, Carlton for mentoring, and Natalia Bidart, Nick Pope, and Sarah Boyce for reviewing!

Tasks framework

The next headline feature we’re covering:

Django now includes a built-in Tasks framework for running code outside the HTTP request–response cycle. This enables offloading work, such as sending emails or processing data, to background workers.

Basically, there’s a new API for defining and enqueuing background tasks—very cool!

Background tasks are a way of running code outside of the request-response cycle. They’re a common requirement in web applications, used for sending emails, processing images, generating reports, and more.

Historically, Django has not provided any system for background tasks, and kind of ignored the problem space altogether. Developers have instead relied on third-party packages like Celery or Django Q2 . While these systems are fine, they can be complex to set up and maintain, and often don’t “go with the grain” of Django.

The new Tasks framework fills this gap by providing an interface to define background tasks, which task runner packages can then integrate with. This common ground allows third-party Django packages to define tasks in a standard way, assuming you’ll be using a compatible task runner to execute them.

Define tasks with the new @task decorator:

from django.tasks import task


@task
def resize_video(video_id): ...

…and enqueue them for background execution with the Task.enqueue() method:

from example.tasks import resize_video


def upload_video(request):
    ...
    resize_video.enqueue(video.id)
    ...

Execute tasks

At this time, Django does not include a production-ready task backend, only two that are suitable for development and testing:

  • ImmediateBackend - runs tasks synchronously, blocking until they complete.
  • DummyBackend - does nothing when tasks are enqueued, but allows them to be inspected later. Useful for tests, where you can assert that tasks were enqueued without actually running them.
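
To try the framework without extra dependencies, you can point the new TASKS setting at one of these built-in backends. A minimal development sketch, assuming the dotted paths follow the django.tasks.backends module layout:

TASKS = {
    "default": {
        "BACKEND": "django.tasks.backends.immediate.ImmediateBackend",
    },
}

Swapping in django.tasks.backends.dummy.DummyBackend in your test settings then lets tests assert that tasks were enqueued without executing them.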

For production use, you’ll need to use a third-party package that implements one, for which django-tasks , the reference implementation, is the primary option. It provides DatabaseBackend for storing tasks in your SQL database, a fine solution for many projects, avoiding extra infrastructure and allowing atomic task enqueuing within database transactions. We may see this backend merged into Django in due course, or at least become an official package, to help make Django “batteries included” for background tasks.

To use django-tasks’ DatabaseBackend today, first install the package (published on PyPI as django-tasks):
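
$ python -m pip install django-tasks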

Second, add these two apps to your INSTALLED_APPS setting:

INSTALLED_APPS = [
    # ...
    "django_tasks",
    "django_tasks.backends.database",
    # ...
]

Third, configure DatabaseBackend as your tasks backend in the new TASKS setting :

TASKS = {
    "default": {
        "BACKEND": "django_tasks.backends.database.DatabaseBackend",
    },
}

Fourth, run migrations to create the necessary database tables, using Django’s standard migrate command:
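
$ ./manage.py migrate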

Finally, to run the task worker process, use the package’s db_worker management command:

$ ./manage.py db_worker
Starting worker worker_id=jWLMLrms3C2NcUODYeatsqCFvd5rK6DM queues=default

This process runs indefinitely, polling for tasks and executing them, logging events as it goes:

Task id=10b794ed-9b64-4eed-950c-fcc92cd6784b path=example.tasks.echo state=RUNNING
Hello from test task!
Task id=10b794ed-9b64-4eed-950c-fcc92cd6784b path=example.tasks.echo state=SUCCEEDED

You’ll want to run db_worker in production, and also in development if you want to test background task execution.

History

It’s been a long path to get the Tasks framework into Django, and I’m super excited to see it finally available in Django 6.0. Jake Howard started on the idea for Wagtail, a Django-powered CMS, back in 2021, as they have a need for common task definitions across their package ecosystem. He upgraded the idea to target Django itself in 2024, when he proposed DEP 0014 . As a member of the Steering Council at the time, I had the pleasure of helping review and accept the DEP.

Since then, Jake has been leading the implementation effort, building pieces first in the separate django-tasks package before preparing them for inclusion in Django itself. This step was done under Ticket #35859 , with a pull request that took nearly a year to review and land. Thanks to Jake for his perseverance here, and to all reviewers: Andreas Nüßlein, Dave Gaeddert, Eric Holscher, Jacob Walls, Jake Howard, Kamal Mustafa, @rtr1, @tcely, Oliver Haas, Ran Benita, Raphael Gaschignard, and Sarah Boyce.

Read more about this feature and story in Jake’s post celebrating when it was merged .

Content Security Policy support

Our third headline feature:

Built-in support for the Content Security Policy (CSP) standard is now available, making it easier to protect web applications against content injection attacks such as cross-site scripting (XSS). CSP allows declaring trusted sources of content by giving browsers strict rules about which scripts, styles, images, or other resources can be loaded.

I’m really excited about this, because I’m a bit of a security nerd who’s been deploying CSP for client projects for years.

CSP is a security standard that can protect your site from cross-site scripting (XSS) and other code injection attacks. You set a content-security-policy header to declare which content sources are trusted for your site, and then browsers will block content from other sources. For example, you might declare that only scripts from your own domain are allowed, so an attacker who manages to inject a <script> tag pointing to evil.com would be thwarted, as the browser would refuse to load it.

Previously, Django had no built-in support for CSP, and developers had to rely on building their own, or using a third-party package like the very popular django-csp . But this was a little bit inconvenient, as it meant that other third-party packages couldn’t reliably integrate with CSP, as there was no common API to do so.

The new CSP support provides all the core features that django-csp did, with a slightly tidier and more Djangoey API. To get started, first add ContentSecurityPolicyMiddleware to your MIDDLEWARE setting:

MIDDLEWARE = [
    # ...
    "django.middleware.csp.ContentSecurityPolicyMiddleware",
    # ...
]

Place it next to SecurityMiddleware , as it similarly adds security-related headers to all responses. (You do have SecurityMiddleware enabled, right?)

Second, configure your CSP policy using the new settings:

  • SECURE_CSP to configure the content-security-policy header, which is your actively enforced policy.
  • SECURE_CSP_REPORT_ONLY to configure the content-security-policy-report-only header, which sets a non-enforced policy for which browsers report violations to a specified endpoint. This option is useful for testing and monitoring a policy before enforcing it.

For example, to adopt the nonce-based strict CSP recommended by web.dev , you could start with the following setting:

from django.utils.csp import CSP

SECURE_CSP_REPORT_ONLY = {
    "script-src": [CSP.NONCE, CSP.STRICT_DYNAMIC],
    "object-src": [CSP.NONE],
    "base-uri": [CSP.NONE],
}

The CSP enum used above provides constants for CSP directives, to help avoid typos.

This policy is quite restrictive and will break most existing sites if deployed as-is, because it requires nonces, as covered next. That’s why the example shows starting with the report-only header, to help track down places that need fixing before enforcing the policy. You’d later switch to the SECURE_CSP setting to enforce the policy.
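
Once the reports come back clean, enforcement is a matter of moving the same policy into SECURE_CSP. A minimal sketch, mirroring the example above:

from django.utils.csp import CSP

SECURE_CSP = {
    "script-src": [CSP.NONCE, CSP.STRICT_DYNAMIC],
    "object-src": [CSP.NONE],
    "base-uri": [CSP.NONE],
}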

Anyway, those are the two basic steps to set up the new CSP support!

Nonce generation

A key part of the new feature is that nonce generation is now built-in to Django, when using the CSP middleware. Nonces are a security feature in CSP that allow you to mark specific <script> and <style> tags as trusted with a nonce attribute:

<script src=/static/app.js type=module nonce=55vsH4w7ATHB85C3MbPr_g></script>

The nonce value is randomly generated per-request, and included in the CSP header. An attacker performing content injection couldn’t guess the nonce, so browsers can trust only those tags that include the correct nonce. Because nonce generation is now part of Django, third-party packages can depend on it for their <script> and <style> tags and they’ll continue to work if you adopt CSP with nonces.
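
For illustration, with the example policy from earlier, the header Django emits would look roughly like this (the nonce value is made up and changes on every request):

Content-Security-Policy: script-src 'nonce-55vsH4w7ATHB85C3MbPr_g' 'strict-dynamic'; object-src 'none'; base-uri 'none'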

Nonces are the recommended way to use CSP today, avoiding problems with previous allow-list based approaches. That’s why the above recommended policy enables them. To adopt a nonce-based policy, you’ll need to annotate your <script> and <style> tags with the nonce value through the following steps.

First, add the new csp template context processor to your TEMPLATES setting:

TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "OPTIONS": {
            "context_processors": [
                # ...
                "django.template.context_processors.csp",
            ],
        },
    },
]

Second, annotate your <script> and <style> tags with nonce="{{ csp_nonce }}" :

-   <script src="{% static 'app.js' %}" type="module"></script>
+   <script src="{% static 'app.js' %}" type="module" nonce="{{ csp_nonce }}"></script>

This can be tedious and error-prone, hence using the report-only mode first to monitor violations might be useful, especially on larger projects.

Anyway, deploying CSP right would be another post in itself, or even a book chapter, so we’ll stop here for now. For more info, check out that web.dev article and the MDN CSP guide .

History

CSP itself was proposed for browsers way back in 2004, and was first implemented in Mozilla Firefox version 4, released in 2011. That same year, Django Ticket #15727 was opened, proposing adding CSP support to Django. Mozilla created django-csp in 2010, before the first public availability of CSP, using it on their own Django-powered sites. The first comment on Ticket #15727 pointed to django-csp, and the community basically rolled with it as the de facto solution.

Over the years, CSP itself evolved, as did django-csp, with Rob Hudson ending up as its maintainer. Focusing on the package motivated him to finally get CSP into Django itself. He made a draft PR and posted on Ticket #15727 in 2024, which I enjoyed helping review. He iterated on the PR over the next 13 months until it was finally merged for Django 6.0. Thanks to Rob for his heroic dedication here, and to all reviewers: Benjamin Balder Bach, Carlton Gibson, Collin Anderson, David Sanders, David Smith, Florian Apolloner, Harro van der Klauw, Jake Howard, Natalia Bidart, Paolo Melchiorre, Sarah Boyce, and Sébastien Corbin.

Email API updates

The fourth and final headline feature:

Email handling in Django now uses Python’s modern email API, introduced in Python 3.6. This API, centered around the email.message.EmailMessage class, offers a cleaner and Unicode-friendly interface for composing and sending emails.

This is a major change, but it’s unlikely to affect projects using basic email features. You can still use Django’s send_mail() function and EmailMessage class as before, like:

from django.core.mail import EmailMessage

email = EmailMessage(
    subject="🐼 Need more bamboo",
    body="We are desperately low, please restock before the pandas find out!",
    from_email="zookeeper@example.com",
    to=["supplies@example.com"],
)
email.attach_file("/media/bamboo_cupboard.jpg")
email.send()

The key change is that, under the hood, when you call send() on a Django EmailMessage object, it now translates itself into Python’s newer email.message.EmailMessage type before sending.

Modernizing provides these benefits:

  1. Fewer bugs - many edge case bugs in Python’s old email API have been fixed in the new one.
  2. Django is less hacky - a bunch of workarounds and security fixes in Django‘s email code have been removed.
  3. More convenient API - the new API supports some niceties, like the below inline attachment example.

Easier inline attachments with MIMEPart

Django’s EmailMessage.attach() method allows you to attach a file as an attachment. Emails support images as inline attachments , which can be displayed within the HTML email body.

While you could previously use EmailMessage.attach() to add inline attachments, it was a bit fiddly, using a legacy class. Now, you can call the method with a Python email.message.MIMEPart object to add an inline attachment in a few steps:

import email.utils
from email.message import MIMEPart
from django.core.mail import EmailMultiAlternatives

message = EmailMultiAlternatives(
    subject="Cute Panda Alert",
    body="Here's a cute panda picture for you!",
    from_email="cute@example.com",
    to=["fans@example.com"],
)
with open("panda.jpg", "rb") as f:
    panda_jpeg = f.read()

cid = email.utils.make_msgid()
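# Note: email.utils.make_msgid() returns the ID wrapped in angle brackets,
# e.g. "<...@host>", which is why the HTML below strips them with cid[1:-1].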
inline_image = MIMEPart()
inline_image.set_content(
    panda_jpeg,
    maintype="image",
    subtype="jpeg",
    disposition="inline",
    cid=cid,
)
message.attach(inline_image)
message.attach_alternative(
    f'<h1>Cute panda baby alert!</h1><img src="cid:{cid[1:-1]}">',
    "text/html",
)

It’s not the simplest API, but it does expose all the power of the underlying email system, and it’s better than the past situation.

History

The new email API was added to Python as provisional in version 3.4 (2014) , and made stable in version 3.6 (2016) . The legacy API, however, was never planned for deprecation, so there was never any deadline to upgrade Django’s email handling.

In 2024, Mike Edmunds posted on the (old) django-developers mailing list , proposing the upgrade with strong reasoning and planning. This conversation led to Ticket #35581 , which he worked on for eight months until it was merged. Many thanks to Mike for leading this effort, and to Sarah Boyce for reviewing! Email is not a glamorous feature, but it’s a critical communication channel for nearly every Django project, so props for this.

Positional arguments in django.core.mail APIs

We’re now out of the headline features and onto the “minor” changes, starting with this deprecation related to the above email changes:

django.core.mail APIs now require keyword arguments for less commonly used parameters. Using positional arguments for these now emits a deprecation warning and will raise a TypeError when the deprecation period ends:

  • All optional parameters ( fail_silently and later) must be passed as keyword arguments to get_connection() , mail_admins() , mail_managers() , send_mail() , and send_mass_mail() .
  • All parameters must be passed as keyword arguments when creating an EmailMessage or EmailMultiAlternatives instance, except for the first four ( subject , body , from_email , and to ), which may still be passed either as positional or keyword arguments.

Previously, Django would let you pass all parameters positionally, which gets a bit silly and hard to read with long parameter lists, like:

from django.core.mail import send_mail

send_mail(
    "🐼 Panda of the week",
    "This week’s panda is Po Ping, sha-sha booey!",
    "updates@example.com",
    ["adam@example.com"],
    True,
)

The final True doesn’t provide any clue what it means without looking up the function signature. Now, using positional arguments for those less-commonly-used parameters raises a deprecation warning, nudging you to write:

from django.core.mail import send_mail

send_mail(
    "🐼 Panda of the week",
    "This week’s panda is Po Ping, sha-sha booey!",
    "updates@example.com",
    ["adam@example.com"],
    fail_silently=True,
)

This change is appreciated for API clarity, and Django is generally moving towards using keyword-only arguments more often. django-upgrade can automatically fix this one for you, via its mail_api_kwargs fixer .

Thanks to Mike Edmunds, again, for making this improvement in Ticket #36163 .

Extended automatic shell imports

Next up:

Common utilities, such as django.conf.settings, are now automatically imported to the shell by default.

One of the headline features back in Django 5.2 was automatic model imports in the shell , making ./manage.py shell import all of your models automatically. Building on that DX boost, Django 6.0 now also imports other common utilities, for which we can find the full list by running ./manage.py shell with -v 2 :

$ ./manage.py shell -v 2
6 objects imported automatically:

  from django.conf import settings
  from django.db import connection, models, reset_queries
  from django.db.models import functions
  from django.utils import timezone

...

(This is from a project without any models, so only the utilities are listed.)

So that’s:

  • settings , useful for checking your runtime configuration:

    In [1]: settings.DEBUG
    Out[1]: False
    
  • connection and reset_queries() , great for checking the executed queries :

    In [1]: Book.objects.select_related('author')
    Out[1]: <QuerySet []>
    
    In [2]: connection.queries
    Out[2]:
    [{'sql': 'SELECT "example_book"."id", "example_book"."title", "example_book"."author_id", "example_author"."id", "example_author"."name" FROM "example_book" INNER JOIN "example_author" ON ("example_book"."author_id" = "example_author"."id") LIMIT 21',
      'time': '0.000'}]
    
  • models and functions , useful for advanced ORM work:

    In [1]: Book.objects.annotate(
       ...:   title_lower=functions.Lower("title")
       ...: ).filter(
       ...:   title_lower__startswith="a"
       ...: ).count()
    Out[1]: 71
    
  • timezone , useful for using Django’s timezone-aware date and time utilities:

    In [1]: timezone.now()
    Out[1]: datetime.datetime(2025, 12, 1, 23, 42, 22, 558418, tzinfo=datetime.timezone.utc)
    

It remains possible to extend the automatic imports with whatever you’d like, as documented in How to customize the shell command documentation page.

Salvo Polizzi contributed the original automatic shell imports feature in Django 5.2. He then returned to offer these extra imports for Django 6.0, in Ticket #35680. Thanks to everyone who contributed to the forum discussion agreeing on which imports to add, and to Natalia Bidart and Sarah Boyce for reviewing!

Dynamic field refresh on save()

Now let’s discuss a series of ORM improvements, starting with this big one:

GeneratedField s and fields assigned expressions are now refreshed from the database after save() on backends that support the RETURNING clause (SQLite, PostgreSQL, and Oracle). On backends that don’t support it (MySQL and MariaDB), the fields are marked as deferred to trigger a refresh on subsequent accesses.

Django models support having the database generate field values for you in three cases:

  1. The db_default field option, which lets the database generate the default value when creating an instance:

    from django.db import models
    from django.db.models.functions import Now
    
    
    class Video(models.Model):
        ...
        created = models.DateTimeField(db_default=Now())
    
  2. The GeneratedField field type, which is always computed by the database based on other fields in the same instance:

    from django.db import models
    from django.db.models.functions import Concat
    
    
    class Video(models.Model):
        ...
        full_title = models.GeneratedField(
            models.TextField(),
            expression=Concat(
                "title",
                models.Value(" - "),
                "subtitle",
            ),
        )
    
  3. Assigning expression values to fields before saving:

    from django.db import models
    from django.db.models.functions import Now
    
    
    class Video(models.Model):
        ...
        last_updated = models.DateTimeField()
    
    
    video = Video.objects.get(id=1)
    ...
    video.last_updated = Now()
    video.save()
    

Previously, only the first method, using db_default, would refresh the field value from the database after saving. The other two methods would leave you with only the old value or the expression object, meaning you’d need to call Model.refresh_from_db() to get any updated value if necessary. This was hard to remember and cost an extra database query.

Now Django takes advantage of the RETURNING SQL clause to save the model instance and fetch updated dynamic field values in a single query, on backends that support it (SQLite, PostgreSQL, and Oracle). A save() call may now issue a query like:

UPDATE "example_video"
SET "last_updated" = NOW()
WHERE "example_video"."id" = 1
RETURNING "example_video"."last_updated"

Django puts the return value into the model field, so you can read it immediately after saving:

video = Video.objects.get(id=1)
...
video.last_updated = Now()
video.save()
print(video.last_updated)  # Updated value from the database

On backends that don’t support RETURNING (MySQL and MariaDB), Django now marks the dynamic fields as deferred after saving. That way, the later access, as in the above example, will automatically call Model.refresh_from_db() . This ensures that you always read the updated value, even if it costs an extra query.

History

This feature was proposed in Ticket #27222 way back in 2016, by Anssi Kääriäinen. It sat dormant for most of the nine years since, but ORM boss Simon Charette picked it up earlier this year, found an implementation, and pushed it through to completion. Thanks to Simon for continuing to push the ORM forward, and to all reviewers: David Sanders, Jacob Walls, Mariusz Felisiak, nessita, Paolo Melchiorre, Simon Charette, and Tim Graham.

Universal StringAgg aggregate

The next ORM change:

The new StringAgg aggregate returns the input values concatenated into a string, separated by the delimiter string. This aggregate was previously supported only for PostgreSQL.

This aggregate is often used for making comma-separated lists of related items, among other things. Previously, it was only supported on PostgreSQL, as part of django.contrib.postgres :

from django.contrib.postgres.aggregates import StringAgg
from example.models import Video

videos = Video.objects.annotate(
    chapter_ids=StringAgg("chapter", delimiter=","),
)

for video in videos:
    print(f"Video {video.id} has chapters: {video.chapter_ids}")

…which might give you output like:

Video 104 has chapters: 71,72,74
Video 107 has chapters: 88,89,138,90,91,93

Now this aggregate is available on all database backends supported by Django, imported from django.db.models :

from django.db.models import StringAgg, Value
from example.models import Video

videos = Video.objects.annotate(
    chapter_ids=StringAgg("chapter", delimiter=Value(",")),
)

for video in videos:
    print(f"Video {video.id} has chapters: {video.chapter_ids}")

Note the delimiter argument now requires a Value() expression wrapper for literal strings, as above. This change allows you to use database functions or fields as the delimiter if desired.

While most Django projects stick to PostgreSQL, having this aggregate available on all backends is a nice improvement for cross-database compatibility, and it means third-party packages can use it without affecting their database support.

History

The PostgreSQL-specific StringAgg was added way back in Django 1.9 (2015) by Andriy Sokolovskiy, in Ticket #24301 . In Ticket #35444 , Chris Muthig proposed adding the Aggregate.order_by option, something used by StringAgg to specify the ordering of concatenated elements, and as a side effect this made it possible to generalize StringAgg to all backends.

Thanks to Chris for proposing and implementing this change, and to all reviewers: Paolo Melchiorre, Sarah Boyce, and Simon Charette.

BigAutoField as the default primary key type

Next up:

DEFAULT_AUTO_FIELD setting now defaults to BigAutoField

This important change helps lock in scalable larger primary keys.

Django 3.2 (2021) introduced the DEFAULT_AUTO_FIELD setting for changing the default primary key type used in models. Django uses this setting to add a primary key field called id to models that don’t explicitly define a primary key field. For example, if you define a model like this:

from django.db import models


class Video(models.Model):
    title = models.TextField()

…then it will have two fields: id and title , where id uses the type defined by DEFAULT_AUTO_FIELD .

The setting can also be overridden on a per-app basis by defining AppConfig.default_auto_field in the app’s apps.py file:

from django.apps import AppConfig


class ChannelConfig(AppConfig):
    name = "channel"
    default_auto_field = "django.db.models.BigAutoField"

A key motivation for adding the setting was to allow projects to switch from AutoField (a 32-bit integer) to BigAutoField (a 64-bit integer) for primary keys, without needing changes to every model. AutoField can store values up to about 2.1 billion (2,147,483,647), which sounds large but becomes easy to hit at scale. BigAutoField can store values up to about 9.2 quintillion, which is “more than enough” for every practical purpose.

If a model using AutoField hits its maximum value, it can no longer accept new rows, a problem known as primary key exhaustion . The table is effectively blocked, requiring an urgent fix to switch the model from AutoField to BigAutoField via a locking database migration on a large table. For a great watch on how Kraken is fixing this problem, see Tim Bell’s DjangoCon Europe 2025 talk , detailing some clever techniques to proactively migrate large tables with minimal downtime.

To stop this problem arising for new projects, Django 3.2 made new projects created with startproject set DEFAULT_AUTO_FIELD to BigAutoField , and new apps created with startapp set their AppConfig.default_auto_field to BigAutoField . It also added a system check to ensure that projects set DEFAULT_AUTO_FIELD explicitly, to ensure users were aware of the feature and could make an informed choice.

Now Django 6.0 changes the actual default values of the setting and app config attribute to BigAutoField . Projects using BigAutoField can remove the setting:

-DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"

…and app config attribute:

from django.apps import AppConfig

 class ChannelConfig(AppConfig):
     name = "channel"
-    default_auto_field = "django.db.models.BigAutoField"

The default startproject and startapp templates also no longer set these values. This change reduces the amount of boilerplate in new projects, and the problem of primary key exhaustion can fade into history, becoming something that most Django users no longer need to think about.

History

The addition of DEFAULT_AUTO_FIELD in Django 3.2 was proposed by Caio Ariede and implemented by Tom Forbes, in Ticket #31007 . This new change in Django 6.0 was proposed and implemented by ex-Fellow Tim Graham, in Ticket #36564 . Thanks to Tim for spotting that this cleanup was now possible, and to Jacob Walls and Clifford Gama for reviewing!

Template variable forloop.length

Moving on to templates, let’s start with this nice little addition:

The new variable forloop.length is now available within a for loop.

This small extension makes it possible to write a template loop like this:

<ul>
  {% for goose in geese %}
    <li>
      <strong>{{ forloop.counter }}/{{ forloop.length }}</strong>: {{ goose.name }}
    </li>
  {% endfor %}
</ul>

Previously, you’d need to refer to the length in another way, like {{ geese|length }}, which is a bit less flexible.

Thanks to Jonathan Ströbele for contributing this idea and implementation in Ticket #36186 , and to David Smith, Paolo Melchiorre, and Sarah Boyce for reviewing.

querystring template tag enhancements

There are two extensions to the querystring template tag , which was added in Django 5.1 to help with building links that modify the current request’s query parameters.

  1. Release note:

    The querystring template tag now consistently prefixes the returned query string with a ? , ensuring reliable link generation behavior.

    This small change improves how the tag behaves when an empty mapping of query parameters is provided. Say you had a template like this:

    <a href="{% querystring params %}">Reset search</a>
    

    …where params is a dictionary that may sometimes be empty. Previously, if params was empty, the output would be:

    <a href="">Reset search</a>
    

    Browsers treat this as a link to the same URL including the query parameters , so it would not clear the query parameters as intended. Now, with this change, the output will be:

    <a href="?">Reset search</a>
    

    Browsers treat ? as a link to the same URL without any query parameters , clearing them as the user would expect.

    Thanks to Django Fellow Sarah Boyce for spotting this improvement and implementing the fix in Ticket #36268 , and for Django Fellow Natalia Bidart for reviewing!

  2. Release note:

    The querystring template tag now accepts multiple positional arguments, which must be mappings, such as QueryDict or dict .

    This enhancement allows the tag to merge multiple sources of query parameters when building the output. For example, you might have a template like this:

    <a href="{% querystring request.GET super_search_params %}">Super search</a>
    

    …where super_search_params is a dictionary of extra parameters to add to make the current search “super”. The tag merges the two mappings, with later mappings taking precedence for duplicate keys.

    Thanks again to Sarah Boyce for proposing this improvement in Ticket #35529 , to Giannis Terzopoulos for implementing it, and to Natalia Bidart, Sarah Boyce, and Tom Carrick for reviewing!

Fin

That’s a wrap! Thank you for reading my highlights. There are plenty more changes to read about in the release notes .

Also, there are always many more behind-the-scenes improvements and bug fixes that don’t make it into the release notes. Optimizations and micro-improvements get merged all the time, so don’t delay, upgrade today!

Thank you to all 174 people who contributed to Django 6.0, as counted in this list by Mariusz Felisiak.

May your upgrade be swift, smooth, safe, and secure,

—Adam



You’ll Never Guess Who Won the Newly Created ‘Peace Prize’ From FIFA, the World’s Most Corrupt Sports Body

Daring Fireball
www.theguardian.com
2025-12-09 20:28:21
The Guardian: There on a plinth, with “Donald J Trump” emblazoned on it in capital letters, was the uncoveted trophy: a golden globe resting on five golden hands big enough to compensate any tiny-handed recipient feeling sore about the Nobel peace prize. But wait, there was more. “There is also...
Original Article

It had about as much drama and suspense as reading a dictionary or watching election results come in from North Korea.

To the surprise of no one, Donald Trump won the inaugural Fifa peace prize on Friday at a cheesy, gaudy and gauche World Cup draw expertly designed to flatter the world’s most precious ego.

“This is your prize – this is your peace prize!” gushed Gianni Infantino , the bald-headed Fifa president, after Trump took the stage at the John F Kennedy Center for the Performing Arts in snowy Washington.

There on a plinth, with “Donald J Trump” emblazoned on it in capital letters, was the uncoveted trophy: a golden globe resting on five golden hands big enough to compensate any tiny-handed recipient feeling sore about the Nobel peace prize.

But wait, there was more. “There is also a beautiful medal for you that you can wear everywhere you want to go,” added Infantino, knowing that with Trump there is no such thing as too much.

Glowing oranger than usual under the stage lights, Trump eagerly put the medal around his neck without waiting for Infantino to do the honours. He told the audience of 2,000 people: “This is truly one of the great honours of my life.”

It was a Norwegian football commentator who once memorably celebrated victory over England by shouting: “Maggie Thatcher … your boys took a hell of a beating!” Now Fifa had delivered its own “Norwegian Nobel committee … your boys took a hell of a beating!” rebuke to the body that snubbed its favourite president.

Foreign leaders such as Keir Starmer and Benjamin Netanyahu have learned over the past year that flattering Trump is like feeding gold candy to a baby. The more blatant and obvious, the better it works. Now, thanks to Infantino, Trump was centre-stage at world sport’s greatest spectacle.

History sure does rhyme. Benito Mussolini used the 1934 World Cup in Italy to promote a resurgent Roman empire. Before every match, the Italian team performed the “Roman salute”. Il Duce even created a special trophy, the “Coppa del Duce”, which was six times bigger than the official Jules Rimet World Cup trophy.

The last time the US hosted the World Cup, in 1994, the draw was held in Las Vegas and President Bill Clinton did not attend. But Infantino, who regards America as football’s undiscovered country of profit, has pursued Trump as ardently as Count Dracula crossing oceans of time to find his lost love.

The Fifa supremo was spotted at Trump’s second inauguration earlier this year and is a regular guest in the Oval Office and at his Mar-a-Lago estate in Florida. He made no objection when Trump elbowed his way into Chelsea’s Club World Cup celebration . Fifa has even opened a new office in Trump Tower in New York.

The World Cup final draw was therefore held without irony at the Kennedy Center, where Senate Democrats have launched an investigation into alleged cronyism and corruption under the leadership of a Trump appointee, just round the corner from the Watergate building, where a burglary and cover-up brought down Richard Nixon.

All very Fifa .

The extravaganza began with the Italian tenor Andrea Bocelli belting out the aria Nessun Dorma, which translates as "None shall sleep" – a subtle dig at the president who has recently been seen dozing at meetings?

The hosts were the model and presenter Heidi Klum, wearing a shimmering gold dress, and comedian Kevin Hart, wearing a black sweater with a sparkling necklace. There was the customary montage of football clips, including Diego Maradona’s second for Argentina against England in 1986 – perhaps Trump, often accused of cheating at golf, would have preferred Maradona’s shameless “Hand of God” first.

Trump, who coined the phrase “truthful hyperbole” in his book The Art of the Deal, would surely have admired the way Infantino declared: “This will not just be the greatest football event, this will be the greatest event in human history, the greatest event that humanity will ever witness … This is like 104 Super Bowls in one month.”

The Lex Luthor of world football got Americans in the audience to chant “USA! USA! USA!” then Canadians to chant “Canada, Canada, Canada!” and Mexicans to chant “Mexico, Mexico, Mexico!” Then, after a noisy display by Nicole Scherzinger and Robbie Williams, it was time for Trump’s moment of glory.

As a glossy video played, a voiceover tried to convince everyone this prize had not just been made up entirely for Trump’s benefit. “Peace creates hope and football translates that hope into unity,” it said.

“We honour a dynamic leader who has engaged in diplomatic efforts that create opportunities for dialogue, de-escalation and stability and who has championed the unifying power of football on the world stage.”

It was more wordy justification than the Dodo offered in Alice’s Adventures in Wonderland: “All must have prizes.”

The narration ran through the dodgy list of eight conflicts that Trump claims to have settled during his 10 months in office. It did not mention his fawning over Russia’s Vladimir Putin or extrajudicial killing of dozens of unnamed, untried people on small boats in the Caribbean. Any chance of VAR on this decision?

The audience was treated to slow-motion video of Trump at the Gaza peace summit, Trump meeting the Indian prime minister, Narendra Modi, Trump signing the Abraham accords, Trump with the leaders of the Democratic Republic of the Congo and Rwanda – and of Infantino giving him a thumbs-up like a proud football dad.

Then came the presentation and, shortly afterwards, Trump standing on stage alongside the Canadian prime minister, Mark Carney, and Mexican president, Claudia Sheinbaum, behind plastic stands as if taking part in a gameshow. The US president tried to go all Ted Lasso, reminiscing about watching Pelé play for the New York Cosmos and admitting that soccer should be called “football”.

Once the convoluted draw – did Trump’s eyes stay open? – was done, the show ended like a Trump rally with the Village People belting out Y.M.C.A. The president had got his prize and Infantino had got his man. Next stop the Oscars?

mistralai/mistral-vibe

Simon Willison
simonwillison.net
2025-12-09 20:19:21
mistralai/mistral-vibe Here's the Apache 2.0 licensed source code for Mistral's new "Vibe" CLI coding agent, released today alongside Devstral 2. It's a neat implementation of the now standard terminal coding agent pattern, built in Python on top of Pydantic and Rich/Textual (here are the dependenci...
Original Article

mistralai/mistral-vibe . Here's the Apache 2.0 licensed source code for Mistral's new "Vibe" CLI coding agent, released today alongside Devstral 2.

It's a neat implementation of the now standard terminal coding agent pattern, built in Python on top of Pydantic and Rich/Textual (here are the dependencies .) Gemini CLI is TypeScript, Claude Code is closed source (TypeScript, now on top of Bun ), OpenAI's Codex CLI is Rust. OpenHands is the other major Python coding agent I know of, but I'm likely missing some others.

The Vibe source code is pleasant to read and the crucial prompts are neatly extracted out into Markdown files. Some key places to look:

The Python implementations of those tools can be found here .

I tried it out and had it build me a Space Invaders game using three.js with the following prompt:

make me a space invaders game as HTML with three.js loaded from a CDN

Animated screenshot of Mistral Vibe running in a terminal. It reports creating space_invaders.html: arrow keys move the player, spacebar shoots, with enemy invaders, collision detection, score tracking, a game over screen, and increasing difficulty. The status bar shows "auto-approve on (shift-tab to toggle)" and 7% of 100k tokens used.

Here's the source code and the live game . It did OK.

Official Propaganda for Caribbean Military Buildup Includes “Crusader Cross”

Intercept
theintercept.com
2025-12-09 20:11:31
Once eschewed by the Pentagon, the “Jerusalem cross” has been co-opted by the far right — and embraced by Pete Hegseth. The post Official Propaganda for Caribbean Military Buildup Includes “Crusader Cross” appeared first on The Intercept....
Original Article

An official U.S. military social media account on Monday shared a photo collage that included a symbol long affiliated with extremist groups — and Secretary of Defense Pete Hegseth.

In a post on X trumpeting the deployment of troops to the Caribbean, U.S. Southern Command, or SOUTHCOM, shared an image that prominently displayed a so-called Jerusalem cross on the helmet of a masked commando.

The Jerusalem cross, also dubbed the “Crusader cross” for its roots in Medieval Christians’ holy wars in the Middle East, is not inherently a symbol of extremism. It has, however, become popular on the right to symbolize the march of Christian civilization, with anti-Muslim roots that made it into something of a logo for the U.S. war on terror.

Tattoos of the cross, a squared-off symbol with a pattern of repeating crosses, have appeared on the bodies of people ranging from mercenaries hired by the Gaza Humanitarian Foundation to Hegseth himself.

Now, the symbol has reared its head again to advertise President Donald Trump’s military buildup against Venezuela — an overwhelmingly Catholic country — and boat strikes in the Caribbean.

“As with all things Trump, it’s a continuation, with some escalation, and then a transformation into spectacle,” said Yale University historian Greg Grandin, whose work focuses on U.S. empire in Latin America .

The social media post came amid rising controversy over a series of strikes on boats allegedly carrying drugs off the coast of Venezuela, dubbed Operation Southern Spear.

Hegseth is alleged to have ordered a so-called “ double-tap ” strike, a follow-up attack against a debilitated boat that killed survivors clinging to the wreckage for around 45 minutes. The U.S. has carried out 22 strikes since the campaign began in September, killing a total of 87 people .

The Pentagon’s press office declined to comment on the use of the Jerusalem cross, referring questions to SOUTHCOM. But in a reply to the X post on Monday, Hegseth’s deputy press secretary Joel Valdez signaled his approval with emojis of a salute and the American flag. In a statement to The Intercept, SOUTHCOM spokesperson Steven McLoud denied that the post implied any religious or far-right message.

“The graphic you’re referring to was an illustration of service members in a ready posture during Operation SOUTHERN SPEAR,” McLoud told The Intercept. “There is no other communication intent for this image.”

The original image of the masked service member appears to have come from an album published online by the Pentagon that depicts a training exercise by Marines aboard the USS Iwo Jima in the Caribbean Sea in October. The photo depicting the cross, however, was removed from the album after commentators on social media pointed out its origins.

Amanda Saunders, a spokesperson for the Defense Visual Information Distribution Service, the Pentagon-run photo agency, said she was unable to comment directly but forwarded the request to the Marine unit involved in the exercise.

“Content on DVIDS is published and archived directly by the registered units,” she said, “so we don’t have control over what is posted or removed, nor are we able to comment on those decisions.”

Hegseth and the Cross

The Jerusalem cross’s popularity on the right has surged in part thanks to featuring in various media, including the 2005 Ridley Scott film “Kingdom of Heaven” and video games, according to Matthew Gabriele, a professor of medieval studies at Virginia Tech and a scholar of Crusader iconography.

“It supports the rhetoric of ‘defense of homeland.’”

“It supports the rhetoric of ‘defense of homeland,’” Gabriele told The Intercept, “because the crusaders, in the right’s understanding, were waging a defensive war against enemies trying to invade Christian lands.”

The symbol’s position of prominence in official military communications is just the latest example of a trollish extremism by the Trump administration’s press teams, which have made a point of reveling in the cruelty wrought on its perceived enemies at home and abroad, or “owning the libs.”

Monday’s post may also be intended as Hegseth putting his thumb in the eye of the Pentagon’s old guard. Hegseth’s embrace of the symbol — in the form of a gaudy chest tattoo — once stymied, however temporarily, his ambitions in the military.

Following the January 6 insurrection, according to Hegseth and reporting by the Washington Post, Hegseth was ordered to stand down rather than deploy with his National Guard unit ahead of the 2021 inauguration of Joe Biden. The decision to treat Hegseth as a possible “insider threat” came after someone flagged a photo of a shirtless Hegseth to military brass, according to the Washington Post.

“I joined the Army in 2001 because I wanted to serve my country. Extremists attacked us on 9/11, and we went to war,” Hegseth wrote in “The War on Warriors,” his 2024 memoir. “Twenty years later, I was deemed an ‘extremist’ by that very same Army.”

Hegseth was hardly chastened by the episode and has since gotten more tattoos with more overt anti-Muslim resonance, including the Arabic word for “infidel,” which appeared on his bicep sometime in the past several years. It’s accompanied by another bicep tattoo of the Latin words “Deus vult,” or “God wills it,” yet another slogan associated with the Crusades and repurposed by extremist groups.

The use of the image to advertise aggressive posturing in a majority-Christian region like Latin America may seem odd at first glance. In the context of renewed U.S. focus on Latin America, however, it’s a potent symbol of the move of military action from the Middle East to the Western Hemisphere.

“They’re globalizing the Monroe Doctrine.”

The post comes on the heels of the release of Trump’s National Security Strategy, a 33-page document outlining the administration’s foreign-policy priorities that explicitly compared Trump’s stance to the Monroe Doctrine, the 19th-century policy of U.S. dominance in Latin America in opposition to colonialism by other foreign powers. Grandin, the Yale historian, described the document as a “vision of global dominance” based on a model of great-powers competition that can lead to immense instability.

“They’re globalizing the Monroe Doctrine,” Grandin said. “I’m no fan of the hypocrisy and arrogance of the old liberal international order, but there’s something to be said for starting from a first principle of shared interests, which does keep great conflict at bay to some degree.”

Agentic AI Foundation

Hacker News
block.xyz
2025-12-09 20:00:39
Comments...

You Can’t Please a Madman

Daring Fireball
truthsocial.com
2025-12-09 19:58:52
Donald Trump, on his blog: The only reason Marjorie “Traitor” Brown (Green turns Brown under stress!) went BAD is that she was JILTED by the President of the United States (Certainly not the first time she has been jilted!). Too much work, not enough time, and her ideas are, NOW, really BAD — Sh...

Microsoft releases Windows 10 KB5071546 extended security update

Bleeping Computer
www.bleepingcomputer.com
2025-12-09 19:54:26
Microsoft has released the KB5071546 extended security update to resolve 57 security vulnerabilities, including three zero-day flaws. [...]...
Original Article


Microsoft has released the KB5071546 extended security update to resolve 57 security vulnerabilities, including three zero-day flaws.

If you are running Windows 10 Enterprise LTSC or are enrolled in the ESU program, you can install this update like normal by going into Settings, clicking on Windows Update, and manually performing a 'Check for Updates.'

Windows 10 KB5071546 update (Source: BleepingComputer)

As this update is mandatory, it will automatically install and prompt you to restart your device when it is complete.

After installing this update, Windows 10 will be updated to build 19045.6691, and Windows 10 Enterprise LTSC 2021 will be updated to build 19044.6691.

What's new in Windows 10 KB5071546

Microsoft is no longer releasing new features for Windows 10, and the KB5071546 update contains only security updates and fixes for bugs introduced by previous security updates.

With this release, Microsoft has fixed a remote code execution zero-day vulnerability in PowerShell tracked as CVE-2025-54100 that could allow malicious scripts embedded in a webpage to be executed when the page is retrieved using the "Invoke-WebRequest" command.

When running PowerShell scripts that use the "Invoke-WebRequest" command, PowerShell 5.1 (the default version on Windows 10) will now display a warning that this could cause scripts on the page to be executed.

If a page is untrusted, Windows users should use the -UseBasicParsing command line argument to prevent embedded scripts from being parsed.

    Security Warning: Script Execution Risk
    Invoke-WebRequest parses the content of the web page. Script code in the web page might be run when the page is parsed.
    RECOMMENDED ACTION:
    Use the -UseBasicParsing switch to avoid script code execution.
    Do you want to continue?

For additional details, see KB5074596: PowerShell 5.1: Preventing script execution from web content (https://support.microsoft.com/help/5072034).

Microsoft has released an advisory on when and how to use this command-line flag.
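For illustration, here is a minimal sketch of the pattern Microsoft is steering scripts toward: opting into basic parsing when retrieving a page you do not trust. The URL below is a placeholder, not anything taken from the advisory.

```
# Hypothetical example: fetch a page without letting PowerShell 5.1's default
# (Internet Explorer-based) parser evaluate any script content in the response.
$response = Invoke-WebRequest -Uri "https://example.com/untrusted-page" -UseBasicParsing

# Basic parsing still exposes the status code, headers, and raw HTML.
$response.StatusCode
$response.Content
```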

Microsoft states that there are no known issues with this update.

10 Years of Let's Encrypt Certificates

Lobsters
letsencrypt.org
2025-12-09 19:53:28
Comments...
Original Article

On September 14, 2015, our first publicly-trusted certificate went live. We were proud that we had issued a certificate that a significant majority of clients could accept, and had done it using automated software . Of course, in retrospect this was just the first of billions of certificates. Today, Let’s Encrypt is the largest certificate authority in the world in terms of certificates issued, the ACME protocol we helped create and standardize is integrated throughout the server ecosystem, and we’ve become a household name among system administrators. We’re closing in on protecting one billion web sites.

In 2023, we marked the tenth anniversary of the creation of our nonprofit , Internet Security Research Group, which continues to host Let’s Encrypt and other public benefit infrastructure projects. Now, in honor of the tenth anniversary of Let’s Encrypt’s public certificate issuance and the start of the general availability of our services, we’re looking back at a few milestones and factors that contributed to our success.

Growth

A conspicuous part of Let’s Encrypt’s history is how thoroughly our vision of scalability through automation has succeeded.

In March 2016, we issued our one millionth certificate. Just two years later, in September 2018, we were issuing a million certificates every day. In 2020 we reached a billion total certificates issued and as of late 2025 we’re frequently issuing ten million certificates per day. We’re now on track to reach a billion active sites, probably sometime in the coming year. (The “certificates issued” and “certificates active” metrics are quite different because our certificates regularly expire and get replaced.)

The steady growth of our issuance volume shows the strength of our architecture, the validity of our vision, and the great efforts of our engineering team to scale up our own infrastructure. It also reminds us of the confidence that the Internet community is placing in us, making the use of a Let’s Encrypt certificate a normal and, dare we say, boring choice. But I often point out that our ever-growing issuance volumes are only an indirect measure of value. What ultimately matters is improving the security of people’s use of the web, which, as far as Let’s Encrypt’s contribution goes, is not measured by issuance volumes so much as by the prevalence of HTTPS encryption. For that reason, we’ve always emphasized the graph of the percentage of encrypted connections that web users make (here represented by statistics from Firefox).

(These graphs are snapshots as of the date of this post; a dynamically updated version is found on our stats page .) Our biggest goal was to make a concrete, measurable security impact on the web by getting HTTPS connection prevalence to increase—and it’s worked. It took five years or so to get the global percentage from below 30% to around 80%, where it’s remained ever since. In the U.S. it has been close to 95% for a while now.

A good amount of the remaining unencrypted traffic probably comes from internal or private organizational sites (intranets), but other than that we don’t know much about it; this would be a great topic for Internet security researchers to look into.

We believe our present growth in certificate issuance volume is essentially coming from growth in the web as a whole. In other words, if we protect 20% more sites over some time period, it’s because the web itself grew by 20%.

A few milestones

We’ve blogged about most of Let’s Encrypt’s most significant milestones as they’ve happened, and I invite everyone in our community to look over those blog posts to see how far we’ve come. We’ve also published annual reports for the past seven years , which offer elegant and concise summaries of our work.

As I personally think back on the past decade, just a few of the many events that come to mind include:

We’ve also periodically rolled out new features such as internationalized domain name support (2016), wildcard support (2018), and short-lived and IP address (2025) certificates. We’re always working on more new features for the future.

There are many technical milestones like our database server upgrades in 2021, where we found we needed a serious server infrastructure boost because of the tremendous volumes of data we were dealing with. Similarly, our original infrastructure was using Gigabit Ethernet internally, and, with the growth of our issuance volume and logging, we found that our Gigabit Ethernet network eventually became too slow to synchronize database instances! (Today we’re using 25-gig Ethernet.) More recently, we’ve experimented with architectural upgrades to our ever-growing Certificate Transparency logs, and decided to go ahead with deploying those upgrades —to help us not just keep up with, but get ahead of, our continuing growth.

These kinds of growing pains and successful responses to them are nice to remember because they point to the inexorable increase in demands on our infrastructure as we’ve become a more and more essential part of the Internet. I’m proud of our technical teams which have handled those increased demands capably and professionally.

I also recall the ongoing work involved in making sure our certificates would be as widely accepted as possible, which has meant managing the original cross-signature from IdenTrust, and subsequently creating and propagating our own root CA certificates. This process has required PKI engineering, key ceremonies, root program interactions, documentation, and community support associated with certificate migrations. Most users never have reason to look behind the scenes at our chains of trust, but our engineers update them as root and intermediate certificates are replaced. We’ve engaged at the CA/B Forum, IETF, and in other venues with the browser root programs to help shape the web PKI as a technical leader.

As I wrote in 2020 , our ideal of complete automation of the web PKI aims at a world where most site owners wouldn’t even need to think about certificates at all. We continue to get closer and closer to that world, which creates a risk that people will take us and our services for granted, as the details of certificate renewal occupy less of site operators’ mental energy. As I said at the time,

When your strategy as a nonprofit is to get out of the way, to offer services that people don’t need to think about, you’re running a real risk that you’ll eventually be taken for granted. There is a tension between wanting your work to be invisible and the need for recognition of its value. If people aren’t aware of how valuable our services are then we may not get the support we need to continue providing them.

I’m also grateful to our communications and fundraising staff who help make clear what we’re doing every day and how we’re making the Internet safer.

Recognition of Let’s Encrypt

Our community continually recognizes our work in tangible ways by using our certificates—now by the tens of millions per day—and by sponsoring us .

We were honored to be recognized with awards including the 2022 Levchin Prize for Real-World Cryptography and the 2019 O’Reilly Open Source Award . In October of this year some of the individuals who got Let’s Encrypt started were honored to receive the IEEE Cybersecurity Award for Practice .

We documented the history, design, and goals of the project in an academic paper at the ACM CCS ‘19 conference , which has subsequently been cited hundreds of times in academic research.

Ten years later, I’m still deeply grateful to the five initial sponsors that got Let’s Encrypt off the ground - Mozilla, EFF, Cisco, Akamai, and IdenTrust. When they committed significant resources to the project, it was just an ambitious idea. They saw the potential and believed in our team, and because of that we were able to build the service we operate today.

IdenTrust: A critical technical partner

I’d like to particularly recognize IdenTrust , a PKI company that worked as a partner from the outset and enabled us to issue publicly-trusted certificates via a cross-signature from one of their roots. We would simply not have been able to launch our publicly-trusted certificate service without them. Back when I first told them that we were starting a new nonprofit certificate authority that would give away millions of certificates for free, there wasn’t any precedent for this arrangement, and there wasn’t necessarily much reason for IdenTrust to pay attention to our proposal. But the company really understood what we were trying to do and was willing to engage from the beginning. Ultimately, IdenTrust’s support made our original issuance model a reality.

Conclusion

I’m proud of what we have achieved with our staff, partners, and donors over the past ten years. I hope to be even more proud of the next ten years, as we use our strong footing to continue to pursue our mission to protect Internet users by lowering monetary, technological, and informational barriers to a more secure and privacy-respecting Internet.

Let’s Encrypt is a project of the nonprofit Internet Security Research Group, a 501(c)(3) nonprofit. You can help us make the next ten years great as well by donating or becoming a sponsor .

The Big Vitamin D Mistake [pdf]

Hacker News
pmc.ncbi.nlm.nih.gov
2025-12-09 19:36:03
Comments...
Original Article
No preview for link for known binary extension (.pdf), Link: https://pmc.ncbi.nlm.nih.gov/articles/PMC5541280/pdf/jpmph-50-4-278.pdf.

MCP Joins the Agentic AI Foundation

Hacker News
blog.modelcontextprotocol.io
2025-12-09 19:26:54
Comments...
Original Article

Today marks a major milestone for the Model Context Protocol. Anthropic is donating MCP to the Agentic AI Foundation, a directed fund under the Linux Foundation. MCP will become a founding project of the newly created foundation.

In one year, MCP has become one of the fastest-growing and most widely adopted open-source projects in AI: over 97 million monthly SDK downloads, 10,000 active servers, and first-class client support across major AI platforms like ChatGPT, Claude, Cursor, Gemini, Microsoft Copilot, Visual Studio Code and many more.

Since its inception, we’ve remained committed to ensuring MCP remains open and community-driven. This move formalizes that commitment—ensuring MCP’s vendor-neutrality and long-term independence under the same neutral stewardship that supports Kubernetes, PyTorch, and Node.js. Anthropic’s commitment to MCP is unchanged: we will continue to invest in its development, maintain core infrastructure, and actively participate in the community.

MCP will be a founding project of the newly created Agentic AI Foundation (AAIF), a directed fund under the Linux Foundation, co-founded by Anthropic, Block and OpenAI, with support from Google, Microsoft, AWS, Cloudflare and Bloomberg to advance open-source innovation in agentic AI.

MCP joins two other founding projects: goose by Block and AGENTS.md by OpenAI.

The AAIF Governing Board will make decisions regarding strategic investments, budget allocation, member recruitment, and approval of new projects, while individual projects, such as MCP, maintain full autonomy over their technical direction and day-to-day operations.

MCP’s maintainer structure stays the same

For MCP little changes. The governance model we introduced earlier this year continues as is. The people making decisions about the protocol are still the maintainers who have been stewarding it, guided by community input through our SEP process.

The Linux Foundation provides a neutral home and infrastructure that allows maintainers to operate independently, and will not dictate the technical direction of MCP.

Thank you

To all who’ve adopted and contributed to MCP so far, thank you. None of this would’ve been possible without your contribution. From building servers to maintaining SDKs to filing issues to improving documentation to welcoming new visitors and everything in between, you’ve made MCP what it is today.

Here’s to MCP’s next chapter under the Linux Foundation’s stewardship.

10 Years of Let's Encrypt

Hacker News
letsencrypt.org
2025-12-09 18:54:55
Comments...

Instagram Is Generating Inaccurate SEO Bait for Your Posts

404 Media
www.404media.co
2025-12-09 18:46:16
Instagram is generating headlines for Instagram posts that appear on Google Search results. Users say they are misrepresenting them....
Original Article

Instagram is generating headlines for users’ Instagram posts without their knowledge, seemingly in an attempt to get those posts to rank higher in Google Search results.

I first noticed Instagram-generated headlines thanks to a Bluesky post from the author Jeff VanderMeer. Last week, VanderMeer posted a video to Instagram of a bunny eating a banana. VanderMeer didn’t include a caption or comment with the post, but noticed that it appeared in Google Search results with the following headline: “Meet the Bunny Who Loves Eating Bananas, A Nutritious Snack For Your Pet.”

[Embedded Bluesky post from Jeff VanderMeer (@jeffvandermeer.bsky.social)]

Another Instagram post from the Groton Public Library in Massachusetts—an image of VanderMeer’s Annihilation book cover promoting a group reading—also didn’t include a caption or comment, but appears on Google Search results with the following headline “Join Jeff VanderMeer on a Thrilling Beachside Adventure with Mesta …”

[Embedded Bluesky post from Jeff VanderMeer (@jeffvandermeer.bsky.social)]

I’ve confirmed that Instagram is generating headlines in a similar style for other users without their knowledge. One cosplayer who wished to remain anonymous posted a video of herself showing off costumes in various locations. The same post appeared on Google with a headline about discovering real-life locations to do cosplaying in Seattle. This Instagram mentioned the city in a hashtag but did not write anything resembling that headline.

Google told me that it is not generating the headlines, and that it’s pulling the text directly from Instagram. Meta acknowledged my request for comment but did not respond in time for publication. I’ll update this story if I hear back.

“I hate it,” VanderMeer told me in an email. “If I post content, I want to be the one contextualizing it, not some third party. It's especially bad because they're using the most click-bait style of headline generation, which is antithetical to how I try to be on social—which is absolutely NOT calculated, but organic, humorous, and sincere. Then you add in that this is likely an automated AI process, which means unintentionally contributing to theft and a junk industry, and that the headlines are often inaccurate and the summary descriptions below the headline even worse... basically, your post through search results becomes shitty spam.”

“I would not write mediocre text like that and it sounds as if it was auto-generated at-scale with an LLM. This becomes problematic when the headline or description advertises someone in a way that is not how they would personally describe themselves,” Brian Dang, another cosplayer who goes by @mrdangphotos and noticed Instagram generated headlines for his posts, told me. We don’t know how exactly Instagram is generating these headlines.

By using Google's Rich Result Test tool , which shows what Google sees for any site, I saw that these headlines appeared under the <title></title> tags for those post’s Instagram pages.

“It appears that Instagram is only serving that title to Google (and perhaps other search bots),” Jon Henshaw, a search engine optimization (SEO) expert and editor of Coywolf, told me in an email. “I couldn't find any reference to it in the pre-rendered or rendered HTML in Chrome Dev Tools as a regular visitor on my home network. It does appear like Instagram is generating titles and doing it explicitly for search engines.”

When I looked at the code for these pages, I saw that Instagram was also generating long descriptions for posts without the user’s knowledge, like: “Seattle’s cosplay photography is a treasure trove of inspiration for fans of the genre. Check out these real-life cosplay locations and photos taken by @mrdangphotos. From costumes to locations, get the scoop on how to recreate these looks and capture your own cosplay moments in Seattle.”
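As a rough sketch of the user-agent-dependent check described above (the post URL and user-agent strings here are purely illustrative, and a site may block, redirect, or require login for non-browser clients):

```
# Hypothetical sketch: compare the <title> a page serves to a crawler-style
# user agent versus a browser-style one. All values below are illustrative.
$url = "https://www.instagram.com/p/EXAMPLE/"
$agents = @(
    "Googlebot/2.1 (+http://www.google.com/bot.html)",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
)

foreach ($ua in $agents) {
    $html = (Invoke-WebRequest -Uri $url -UserAgent $ua -UseBasicParsing).Content
    if ($html -match "<title>([^<]*)</title>") {
        Write-Output "$ua => $($Matches[1])"
    } else {
        Write-Output "$ua => no <title> found"
    }
}
```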

Neither the generated headlines nor the descriptions are the alternative text (alt text) that Instagram automatically generates for accessibility reasons . To create alt text, Instagram uses computer vision and artificial intelligence to automatically create a description of the image that people who are blind or have low-vision can access with a screen reader. Sometimes the alt text Instagram generates appears under the headline in Google Search results. At other times, generated description copy that is not the alt text appears in the same place. We don’t know how exactly Instagram is creating these headlines, but it could use similar technology.

“The larger implications are terrible—search results could show inaccurate results that are reputationally damaging or promulgating a falsehood that actively harms someone who doesn't drill down,” VanderMeer said. “And we all know we live in a world where often people are just reading the headline and first couple of paragraphs of an article, so it's possible something could go viral based on a factual misunderstanding.”


‘Netflix and the Hollywood End Game’

Daring Fireball
stratechery.com
2025-12-09 18:45:19
Ben Thompson at Stratechery yesterday: It’s important to note that the President does not have final say in the matter: President Trump directed the DOJ to oppose AT&T’s acquisition of Time Warner, but the DOJ lost in federal court, much to AT&T’s detriment. Indeed, the irony of mergers ...
Original Article

Warner Bros. started with distribution. Just after the turn of the century, Harry, Albert, Sam, and Jack Warner bought a second-hand projector and started showing short films in Ohio and Pennsylvania mining towns; in 1907 they bought their first permanent theater in New Castle, Pennsylvania. Around the same time the brothers also began distributing films to other theaters, and in 1908 began producing their own movies in California. In 1923 the brothers formally incorporated as Warner Bros. Pictures, Inc., becoming one of the five major Hollywood Studios.

What the brothers realized early on was that distribution just wasn’t a very good business: you had to maintain the theater and find films to show, and your profit was capped by your capacity, which you had to work diligently to fill out; after all, every empty seat in a showing was potential revenue that disappeared forever. What was far more lucrative was making the films shown in those theaters: you could film a movie once and make money on it again and again.

In this Hollywood was the tech industry before there was a tech industry, which is to say the studios were the industry that focused its investment on large-up-front costs that could be leveraged repeatedly to make money. Granted, Warner Bros., along with the rest of Hollywood, did come to own large theater chains as well as part of fully integrated companies, but when the Supreme Court, with 1948’s Paramount decrees, forced them to split, it was the theaters that got spun out: making content was simply a much better business than distributing it.

That business only got better over time. First, television provided an expansive new licensing opportunity for films and eventually TV shows; not only were there more televisions than theaters, but they were accessible at all hours in the home. Then, home video added a new window: movies could not only make money in theaters and on TV, but there were entirely new opportunities to rent and sell recordings. The real bonanza, however, was the cable bundle: now, instead of needing to earn discrete revenue, the majority of Hollywood revenue became a de facto annuity, as 90% of households paid an ever increasing amount of money every month to have access to a universe of content they mostly didn’t watch.

Internet Distribution and Aggregation

Netflix, which was founded in 1997, also started with distribution, specifically of DVDs-by-mail; the streaming service that the company is known for today launched in 2007, 100 years after the Warner brothers bought their theater. The differences were profound: because Netflix was on the Internet, it was available literally everywhere; there were no seats to clean or projectors to maintain, and every incremental customer was profit. More importantly, the number of potential customers was, at least in theory, the entire population of the world. That, in a nutshell, is why the Internet is different : you can, from day one, reach anyone, with zero marginal cost.

Netflix did, over time, like Warner Bros. before them, backwards integrate into producing their own content. Unlike Warner Bros., however, that content production was and has always only ever been in service of Netflix’s distribution. What Netflix has understood — and what Hollywood, Warner Bros. included, was far too slow to realize — is that because of the Internet distribution is even more scalable than content.

The specifics of this are not obvious; after all, content is scarce and exclusive, while everyone can access the Internet. However, it’s precisely because everyone can access the Internet that there is an abundance of content, far too much for anyone to consume; this gives power to Aggregators who sort that content on consumers’ behalf, delivering a satisfying user experience. Consumers flock to the Aggregator, which makes the Aggregator attractive to suppliers, giving them more content, which attracts more consumers, all in a virtuous cycle. Over time the largest Aggregators gain overwhelming advantages in customer acquisition costs and simply don’t churn users; that is the ultimate source of their economic power.

This is the lesson Hollywood studios have painfully learned over the last decade. As Netflix grew — and importantly, had a far more desirable stock multiple despite making inferior content — Hollywood studios wanted in on the game, and the multiple, and they were confident they would win because they had the content. Content is king, right? Well, it was, in a world of distribution limited by physical constraints; on the Internet, customer acquisition and churn mitigation in a world of infinite alternatives matters more, and that’s the advantage Netflix had, and that advantage has only grown.

Netflix Buys Warner Bros.

On Friday, Netflix announced it was buying Warner Bros.; from the Wall Street Journal :

Netflix has agreed to buy Warner Bros. for $72 billion after the entertainment company splits its studios and HBO Max streaming business from its cable networks, a deal that would reshape the entertainment and media industry. The cash-and-stock transaction was announced Friday after the two sides entered into exclusive negotiations for the media company known for Superman and the Harry Potter movies, as well as hit TV shows such as “Friends.” The offer is valued at $27.75 per Warner Discovery share and has an enterprise value of roughly $82.7 billion. Rival Paramount, which sought to buy the entire company, including Warner’s cable networks, bid $30 per share all-cash for Warner Discovery, according to people familiar with the matter. Paramount is weighing its next move, which could involve pivoting to other potential acquisitions, people familiar with its plans said.

Paramount’s bid, it should be noted, was for the entire Warner Bros. Discovery business, including the TV and cable networks that will be split off next year; Netflix is only buying the Warner Bros. part. Puck reported that the stub Netflix is leaving behind is being valued at $5/share, which would mean that Netflix outbid Paramount.

And, it should be noted, that Paramount money wouldn’t be from the actual business, which is valued at a mere $14 billion; new owner David Ellison is the son of Oracle founder Larry Ellison, who is worth $275 billion. Netflix, meanwhile, is worth $425 billion and generated $9 billion in cash flow over the last year. Absent family money this wouldn’t be anywhere close to a fair fight.

That’s exactly what you would expect given Netflix’s position — and the most optimistic scenario I painted back in 2016 :

Much of this analysis about the impact of subscriber numbers, growth rates, and churn apply to any SaaS company, but for Netflix the stakes are higher: the company has the potential to be an Aggregator , with the dominance and profits that follow from such a position.

To review: Netflix has acquired users through, among other things, a superior TV viewing experience. That customer base has given the company the ability to secure suppliers, which improve the attractiveness of the company’s offerings to users, which gives Netflix even more power over suppliers. The most bullish outcome in this scenario is Netflix as not simply another cable channel with a unique delivery method, but as the only TV you need with all of the market dominance over suppliers that entails.

The most obvious way that this scenario might have developed is that Netflix ends up being the only buyer for Hollywood suppliers, thanks to their ability to pay more by virtue of having the most customers; that is the nature of the company’s relationship with Sony , which had the foresight (and lack of lost TV network revenue to compensate for) to avoid the streaming wars and simply sell its content to the highest bidder. There are three specific properties I think of, however, that might be examples of what convinced Netflix it was worth simply buying one of the biggest suppliers entirely:

  • In 2019, Netflix launched Formula 1: Drive to Survive , which has been a massive success. The biggest upside recipient of that series, however, has not been Netflix, but Formula 1 owner Liberty Media. In 2018 Liberty Media offered the U.S. TV rights to ESPN for free; seven years later Apple signed a deal to broadcast Formula 1 for $150 million a year. That upside was largely generated by Netflix, who captured none of it.
  • In 2023, NBCUniversal licensed Suits to Netflix, and the show, long since stuck in the Peacock backwater, suddenly became the hottest thing in streaming. Netflix didn’t pay much, because the deal wasn’t exclusive, but it was suddenly apparent to everyone that Netflix had a unique ability to increase the value of library content.
  • In 2025, KPop Demon Hunters became a global phenomenon, and it’s difficult to see that happening absent the Netflix algorithm.

With regards to KPop Demon Hunters , I wrote in an Update :

How much of the struggle for original animation comes from the fact that no one goes to see movies on a lark anymore? Simply making it to the silver screen used to be the biggest hurdle; now that the theater is a destination — something you have to explicitly choose to do, instead of do on a Friday night by default — you need to actually sell, and that favors IP the audience is already familiar with.

In fact, this is the most ironic capstone to Netflix’s rise and the misguided chase by studios seeking to replicate their success: the latter thought that content mattered most, but in truth great content — and again, KPop Demon Hunters is legitimately good — needs distribution and “free” access in the most convenient way possible to prove its worth. To put it another way, KPop Demon Hunters is succeeding on its own merits, but those merits only ever had a chance to matter because they were accessible on the largest streaming service.

In short, I think that Netflix executives have become convinced that simply licensing shows is leaving money on the table: if Netflix is uniquely able to make IP more valuable, then the obvious answer is to own the IP. If the process of acquiring said IP helps force the long overdue consolidation of Hollywood studios, and takes a rival streamer off the board (and denies content to another rival), all the better. There are certainly obvious risks, and the price is high, but the argument is plausible.

Netflix’s Market and Threat

That phrase — “takes a rival streamer off the board” — also raises regulatory questions, and no industry gets more scrutiny than the media in this regard. That is sure to be the case for Netflix; from Bloomberg :

US President Donald Trump raised potential antitrust concerns around Netflix Inc.’s planned $72 billion acquisition of Warner Bros. Discovery Inc., noting that the market share of the combined entity may pose problems. Trump’s comments, made as he arrived at the Kennedy Center for an event on Sunday, may spur concerns regulators will oppose the coupling of the world’s dominant streaming service with a Hollywood icon. The company faces a lengthy Justice Department review of a deal that would reshape the entertainment industry.

“Well, that’s got to go through a process, and we’ll see what happens,” Trump said when asked about the deal, confirming he met Netflix co-Chief Executive Officer Ted Sarandos recently. “But it is a big market share. It could be a problem.”

It’s important to note that the President does not have final say in the matter: President Trump directed the DOJ to oppose AT&T’s acquisition of Time Warner, but the DOJ lost in federal court , much to AT&T’s detriment. Indeed, the irony of mergers and regulatory review is that the success of the latter is often inversely correlated to the wisdom of the former: the AT&T deal for Time Warner never made much sense, which is directly related to why it (correctly) was approved. It would have been economically destructive for AT&T to, say, limit Time Warner content to its networks, so suing over that theoretical possibility was ultimately unsuccessful.

This deal is more interesting.

  • First, it is in part a vertical merger, wherein a distributor is acquiring a supplier, which is generally approved. However, it seems likely that Netflix will, over time, make Warner Bros. content, particularly its vast libraries, exclusive to Netflix, instead of selling it to other distributors. This will be economically destructive in the short term, but it very well may be outweighed by the aforementioned increase in value that Netflix can drive to established IP, giving Netflix more pricing power over time (which will increase regulatory scrutiny).
  • Second, it is also in part a horizontal merger, because Netflix is acquiring a rival streaming service, and presumably taking it off the market. Horizontal mergers get much more scrutiny, because the explicit outcome is to reduce competition. The frustrating point for Netflix is that the company probably doesn’t weigh this point that heavily: it’s difficult to see HBO Max providing incremental customers to Netflix, as most HBO Max customers are also Netflix customers. Indeed, Netflix may argue that they will, at least in the short to medium term, be providing consumers benefit by giving them the same content for a price that is actually lower, since you’re only paying for one service (although again, the long-term goal would be to increase pricing power).

The complaint, if there ends up being one, will, as is so often the case, come down to market definition. If the market is defined extremely narrowly as subscription streaming services, then Netflix will have a harder time; if the market is defined as TV viewing broadly, then Netflix has a good defense: that definition includes linear TV, YouTube, etc., where Netflix’s share is both much smaller and also (correctly) includes their biggest threat (YouTube).

That YouTube is Netflix’s biggest threat speaks to a broader point: because of the Internet there is no scarcity in terms of access to customers; it’s not as if there are a limited number of Internet packets, as there once were a limited number of TV channels. Everything is available to everyone, which means the only scarce resource is people’s time and attention. If this were the market definition — which is the market all of these companies actually care about — then the list of competitors expands beyond TV and YouTube to include social media and user-generated content broadly: TikTok, to take an extreme example, really is a Netflix competitor for the only scarce resource that is left.

Ultimately, however, I think that everything Netflix does has to be framed in the context of the aforementioned YouTube threat. YouTube has not only long surpassed Netflix in consumer time spent generally, but also TV time specifically, and has done so with content it has acquired for free. That is very difficult to compete with in the long run: YouTube will always have more new content than anyone else.

The one big advantage professionally-produced content has, however, is that it tends to be more evergreen and have higher re-watchability. That’s where we come back to the library: implicit in Netflix making library content more valuable is that library content has longevity in a way that YouTube content does not. That, by extension, may speak to why Netflix has decided to initiate the Hollywood end game now: the real threat to Hollywood isn’t (just) that the Internet made distribution free, favoring the Aggregators; it’s that technology has made it possible for anyone to create content, and the threat isn’t theoretical: it’s winning in the market. Netflix may be feared by the town, but everyone in Hollywood should fear the fact that anyone can be a creator much more.

So You Want to Speak at Software Conferences?

Hacker News
dylanbeattie.net
2025-12-09 18:42:27
Comments...
Original Article

I run a .NET user group here in London , and we host a lot of talks from people who are relatively inexperienced presenters. Sometimes they’ve done presentations internally but never spoken before a public audience. Sometimes they’re developers who have been in theatre or played in bands; people with plenty of stage experience but who haven’t presented on technical topics before - and sometimes they’ve never done any kind of public presentations or performance at all. We aim to be a friendly, supportive crowd; public speaking can be daunting, and the first public outing of somebody’s first talk can be… let’s just say that the presenter sometimes learns a lot more than the audience, and leave it at that.

But it can also be a hugely rewarding experience, and as a seasoned tech presenter who’s been doing this for a while, aspiring speakers often ask me for advice on how to take it to the next level.

Before we get into the specifics, there are two things to bear in mind.

One: ask yourself why you want to do this. What does “the next level” mean for you? Are you looking to promote your consultancy, or your training courses, or your software products? Do you want to become a professional speaker and actually get paid to give talks? Are you doing it ‘cos you want to go places and meet people? Figure out what “success” looks like for you.

Two: be realistic about how much work is involved. It took me seven years to go from my first user group lightning talk, back in 2008, to my first international conference. If you think you can hack together some code, write a talk about it, stick it on Sessionize and three months later you’re on your way to a major international event like NDC or Yow! or Devoxx… well, no. That’s not how this works. Strap in; it’s a long ride.

Year 1: Get Good

Write the talk. Write a talk nobody else could do; tell a story nobody else can tell. Figure out what your audience is going to learn, and why you’re the best person to teach them that. Then give it at local user group. It might go great. It might be a train wreck. Don’t worry. That’s one of the reasons user groups exist. Learn from the experience. Fix the demos. Fix the slides. If it was too short? Write some more. If it was too long? Cut something. Give it at another user group. Do it again. Do it again. Maybe write a second talk, shop that one around a couple of user groups too.

If you can’t find user groups, look on Meetup.com. Yes, it’s a horrible platform, but it works; search by topic, search by region, find groups that look like a good match for your content, and ask if they’re looking for speakers. They probably are.

Year 2: Get Seen

After user groups and meetups come the community conferences. Typically small, one-day events, with a few tracks, and usually free (or very cheap) to attend. For me, these were the DDD events (that’s DDD as in Developers! Developers! Developers!, not to be confused with DDD as in Domain Driven Design), a series of one-day free developer events around the UK, organised by volunteers, usually on a Saturday so people don’t have to take time off work. They bring in a good crowd, they’re a great way to get to know other presenters and people who are involved in tech events, and you’ll almost certainly meet a few people who are on the programme committees for the bigger conferences.

Events like this are your chance to get noticed. Turn up the day before, join the pre-conference dinner and drinks, introduce yourself. Yeah, it’s awkward when you don’t know anybody. There will be other people there who don’t know anybody and will appreciate you making the effort. Enjoy yourself, but don’t end up doing tequila shots in a karaoke bar at 3am. Not now. You’re there to give a talk, remember?

Go to the event. Spend the whole day there, do your talk, watch the other sessions. Communicate with the organisers. You don’t want their memorable impression of you to be a half-hour of panic and missed calls because one of their speakers has gone AWOL and nobody knows where they are.

Figure out how to keep in touch with the people you met. Join the Signal or WhatsApp group chat; if there isn’t one, create one. Follow them on LinkedIn, or Bluesky - be prepared to go where people are; don’t expect folks to join Mastodon just because that’s where you want to talk to them. That’s not how this works. If you really don’t want to play the social media game - and I can’t blame you - there’s always good old-fashioned email. A short email a week later saying “hey, thanks for having me” or “hey, I loved your session at DDD, let’s keep in touch” can pay off in a big way.

Finally, watch out for events that put video of their sessions online. Having a couple of YouTube links of you doing your thing in front of a live, appreciative audience can make all the difference when a programme committee is looking at a handful of talks and can only accept one of them.

Year 3: Get Accepted

You’ve got a couple of talks. You’ve delivered them enough times that you know they’re good (and if they’re not good, make them good - or scrap them and write new ones). You know people. People know you. If somebody asks “hey, do we know anybody who could do a good session about $topic”, your name comes up. You’ve got a decent network of connections - group chats, LinkedIn, email addresses.

Now, find all the conferences in your field with an open Call for Papers (CfP), and get submitting. Dave Aronson over at codeasaur.us maintains a really useful list of CfPs which are closing soon . Check that regularly. Many events will cover your travel & hotel costs, although with sponsorship budgets drying up right across the industry that’s not as prevalent as it was a few years ago. If not, maybe you can persuade your employer to pay your travel - “hey, boss, if I can get a free ticket to this amazing conference with all these industry experts, do you think the company will pay my air fare & hotel?”

Lean on your network. What are people submitting to? Which events should you look out for? Which topics are getting a lot of traction (and which topics are not?)

Keep your content fresh. Write new talks. Keep giving them at user groups and community events.

Keep your submissions focused. 2-3 talks per event; don’t submit ten wildly different abstracts to the same conference in the hope one of them will get accepted. Every selection committee I’ve been on, if we see that, we assume the presenter hasn’t actually written *any* of them yet and is throwing everything they can think of into the mix and hoping one of them gets chosen. Not a great way to stand out. An open CFP at a big tech conference typically gets 20+ submissions for every available slot, which means if you reduce it to a numbers game, you’re submitting 20 talks for every one that gets accepted. Keep track of the numbers, and be objective about it.

Year 4: Get Bored.

It’s great fun doing this for a while… but it’s also exhausting. Some people hit it hard for a few years, do all the things, go to all the places, make a lot of great friends and happy memories, and then wake up one day and decide that’s enough. Some people do a few talks, tick it off their bucket list and decide that’s enough for them. Some settle into a gentle routine of 3-4 events they’ll do every year. And yes, some of us end up treating our calendars like a game of Tetris, juggling flights and trains and hotels and meetups and conferences and spending half the year on the road and the other half writing talks and workshops and all the other things it’s hard to do when you’re at the airport.

That’s why you gotta figure out ahead of time what “success” looks like. If you’re doing it for fun, remember to have fun - and if you find you’re not enjoying it any more? Stop. If you’re doing it as promotion or marketing? Track your leads. Make sure it’s actually generating the attention and the revenue it’s supposed to. If you’re doing it for money, be mercenary: no pay, no play. Not every event is the same, of course. In a given year I’ll have some events that are fun, some that are lucrative, some that are running alongside workshops or training engagements. Just make sure you know which is which.

Finally: respect your audience. Whether you’re talking to five people at a meetup, fifty at a community event, or five thousand at a huge international conference: those people are the reason you get to do this. They have given up their time - and often a substantial amount of money - to hear what you have to say. They deserve your best shot, every time. If you find you’re bored, fed up, tired, running talks on autopilot or making mistakes because you just don’t care? It’s time to try something else - and remember, there’s a thousand aspiring speakers out there who would dearly love to take that spot instead of you.

Now get out there. Work hard, have fun, teach us awesome things, and if you ever want me to look over an abstract or a slide deck, drop me a line - [email protected] . I’d be happy to help.

Microsoft December 2025 Patch Tuesday fixes 3 zero-days, 57 flaws

Bleeping Computer
www.bleepingcomputer.com
2025-12-09 18:38:00
Microsoft's December 2025 Patch Tuesday fixes 57 flaws, including one actively exploited and two publicly disclosed zero-day vulnerabilities. [...]...
Original Article

Patch Tuesday

Today is Microsoft's December 2025 Patch Tuesday, which fixes 57 flaws, including one actively exploited and two publicly disclosed zero-day vulnerabilities.

This Patch Tuesday also addresses three "Critical" remote code execution vulnerabilities.

The number of bugs in each vulnerability category is listed below:

  • 28 Elevation of Privilege Vulnerabilities
  • 19 Remote Code Execution Vulnerabilities
  • 4 Information Disclosure Vulnerabilities
  • 3 Denial of Service Vulnerabilities
  • 2 Spoofing Vulnerabilities

When BleepingComputer reports on Patch Tuesday security updates, we only count those released by Microsoft today. Therefore, the number of flaws does not include Microsoft Edge (15 flaws) and Mariner vulnerabilities fixed earlier this month.

To learn more about the non-security updates released today, you can review our dedicated articles on the Windows 11 KB5072033 & KB5071417 cumulative updates .

3 zero-days, one actively exploited

This month's Patch Tuesday fixes one actively exploited and two publicly disclosed zero-day vulnerabilities.

Microsoft classifies a zero-day flaw as publicly disclosed or actively exploited while no official fix is available.

The actively exploited zero-day is:

CVE-2025-62221 - Windows Cloud Files Mini Filter Driver Elevation of Privilege Vulnerability

Microsoft has patched an actively exploited privilege elevation vulnerability in the Windows Cloud Files Mini Filter Driver.

"Use after free in Windows Cloud Files Mini Filter Driver allows an authorized attacker to elevate privileges locally," explains Microsoft.

Microsoft says that successfully exploiting the flaw allows attackers to gain SYSTEM privileges.

Microsoft has attributed the flaw to Microsoft Threat Intelligence Center (MSTIC) & Microsoft Security Response Center (MSRC) but has not shared how the flaw was exploited.

The publicly disclosed zero-day flaws are:

CVE-2025-64671 - GitHub Copilot for Jetbrains Remote Code Execution Vulnerability

Microsoft has patched a publicly disclosed GitHub Copilot flaw that allows an attacker to execute commands locally.

"Improper neutralization of special elements used in a command ('command injection') in Copilot allows an unauthorized attacker to execute code locally," explains Microsoft.

Microsoft says the flaw can be exploited through a Cross Prompt Injection in untrusted files or MCP servers.

"Via a malicious Cross Prompt Inject in untrusted files or MCP servers, an attacker could execute additional commands by appending them to commands allowed in the user's terminal auto-approve setting," continued Microsoft.

Microsoft has attributed the flaw to Ari Marzuk , who recently disclosed the flaw as part of his " IDEsaster: A Novel Vulnerability Class in AI IDEs " report.

CVE-2025-54100 - PowerShell Remote Code Execution Vulnerability

Microsoft has patched a PowerShell vulnerability that could cause scripts embedded in a webpage to be executed when the page is retrieved using Invoke-WebRequest .

"Improper neutralization of special elements used in a command ('command injection') in Windows PowerShell allows an unauthorized attacker to execute code locally," explains Microsoft.

Microsoft has made a change that displays a warning when PowerShell uses 'Invoke-WebRequest,' prompting the user to add the -UseBasicParsing parameter to prevent code execution.

Security Warning: Script Execution Risk
Invoke-WebRequest parses the content of the web page. Script code in the web page might be run when the page is parsed.
RECOMMENDED ACTION:
Use the -UseBasicParsing switch to avoid script code execution.
Do you want to continue?

For additional details, see KB5074596: PowerShell 5.1: Preventing script execution from web content (https://support.microsoft.com/help/5072034).

Microsoft attributes this flaw to numerous researchers, including Justin Necke, DeadOverflow, Pēteris Hermanis Osipovs, Anonymous, Melih Kaan Yıldız , and Osman Eren Güneş .

Recent updates from other companies

Other vendors who released updates or advisories in December 2025 include:

The December 2025 Patch Tuesday Security Updates

Below is the complete list of resolved vulnerabilities in the December 2025 Patch Tuesday updates.

To access the full description of each vulnerability and the systems it affects, you can view the full report here .

Tag CVE ID CVE Title Severity
Application Information Services CVE-2025-62572 Application Information Service Elevation of Privilege Vulnerability Important
Azure Monitor Agent CVE-2025-62550 Azure Monitor Agent Remote Code Execution Vulnerability Important
Copilot CVE-2025-64671 GitHub Copilot for Jetbrains Remote Code Execution Vulnerability Important
Microsoft Brokering File System CVE-2025-62569 Microsoft Brokering File System Elevation of Privilege Vulnerability Important
Microsoft Brokering File System CVE-2025-62469 Microsoft Brokering File System Elevation of Privilege Vulnerability Important
Microsoft Edge (Chromium-based) CVE-2025-13634 Chromium: CVE-2025-13634 Inappropriate implementation in Downloads Unknown
Microsoft Edge (Chromium-based) CVE-2025-13721 Chromium: CVE-2025-13721 Race in v8 Unknown
Microsoft Edge (Chromium-based) CVE-2025-13630 Chromium: CVE-2025-13630 Type Confusion in V8 Unknown
Microsoft Edge (Chromium-based) CVE-2025-13631 Chromium: CVE-2025-13631 Inappropriate implementation in Google Updater Unknown
Microsoft Edge (Chromium-based) CVE-2025-13632 Chromium: CVE-2025-13632 Inappropriate implementation in DevTools Unknown
Microsoft Edge (Chromium-based) CVE-2025-13633 Chromium: CVE-2025-13633 Use after free in Digital Credentials Unknown
Microsoft Edge (Chromium-based) CVE-2025-13638 Chromium: CVE-2025-13638 Use after free in Media Stream Unknown
Microsoft Edge (Chromium-based) CVE-2025-13639 Chromium: CVE-2025-13639 Inappropriate implementation in WebRTC Unknown
Microsoft Edge (Chromium-based) CVE-2025-13640 Chromium: CVE-2025-13640 Inappropriate implementation in Passwords Unknown
Microsoft Edge (Chromium-based) CVE-2025-13637 Chromium: CVE-2025-13637 Inappropriate implementation in Downloads Unknown
Microsoft Edge (Chromium-based) CVE-2025-13720 Chromium: CVE-2025-13720 Bad cast in Loader Unknown
Microsoft Edge (Chromium-based) CVE-2025-13635 Chromium: CVE-2025-13635 Inappropriate implementation in Downloads Unknown
Microsoft Edge (Chromium-based) CVE-2025-13636 Chromium: CVE-2025-13636 Inappropriate implementation in Split View Unknown
Microsoft Edge for iOS CVE-2025-62223 Microsoft Edge (Chromium-based) for Mac Spoofing Vulnerability Low
Microsoft Exchange Server CVE-2025-64666 Microsoft Exchange Server Elevation of Privilege Vulnerability Important
Microsoft Exchange Server CVE-2025-64667 Microsoft Exchange Server Spoofing Vulnerability Important
Microsoft Graphics Component CVE-2025-64670 Windows DirectX Information Disclosure Vulnerability Important
Microsoft Office CVE-2025-62554 Microsoft Office Remote Code Execution Vulnerability Critical
Microsoft Office CVE-2025-62557 Microsoft Office Remote Code Execution Vulnerability Critical
Microsoft Office Access CVE-2025-62552 Microsoft Access Remote Code Execution Vulnerability Important
Microsoft Office Excel CVE-2025-62560 Microsoft Excel Remote Code Execution Vulnerability Important
Microsoft Office Excel CVE-2025-62563 Microsoft Excel Remote Code Execution Vulnerability Important
Microsoft Office Excel CVE-2025-62561 Microsoft Excel Remote Code Execution Vulnerability Important
Microsoft Office Excel CVE-2025-62564 Microsoft Excel Remote Code Execution Vulnerability Important
Microsoft Office Excel CVE-2025-62553 Microsoft Excel Remote Code Execution Vulnerability Important
Microsoft Office Excel CVE-2025-62556 Microsoft Excel Remote Code Execution Vulnerability Important
Microsoft Office Outlook CVE-2025-62562 Microsoft Outlook Remote Code Execution Vulnerability Critical
Microsoft Office SharePoint CVE-2025-64672 Microsoft SharePoint Server Spoofing Vulnerability Important
Microsoft Office Word CVE-2025-62558 Microsoft Word Remote Code Execution Vulnerability Important
Microsoft Office Word CVE-2025-62559 Microsoft Word Remote Code Execution Vulnerability Important
Microsoft Office Word CVE-2025-62555 Microsoft Word Remote Code Execution Vulnerability Important
Storvsp.sys Driver CVE-2025-64673 Windows Storage VSP Driver Elevation of Privilege Vulnerability Important
Windows Camera Frame Server Monitor CVE-2025-62570 Windows Camera Frame Server Monitor Information Disclosure Vulnerability Important
Windows Client-Side Caching (CSC) Service CVE-2025-62466 Windows Client-Side Caching Elevation of Privilege Vulnerability Important
Windows Cloud Files Mini Filter Driver CVE-2025-62457 Windows Cloud Files Mini Filter Driver Elevation of Privilege Vulnerability Important
Windows Cloud Files Mini Filter Driver CVE-2025-62454 Windows Cloud Files Mini Filter Driver Elevation of Privilege Vulnerability Important
Windows Cloud Files Mini Filter Driver CVE-2025-62221 Windows Cloud Files Mini Filter Driver Elevation of Privilege Vulnerability Important
Windows Common Log File System Driver CVE-2025-62470 Windows Common Log File System Driver Elevation of Privilege Vulnerability Important
Windows Defender Firewall Service CVE-2025-62468 Windows Defender Firewall Service Information Disclosure Vulnerability Important
Windows DirectX CVE-2025-62463 DirectX Graphics Kernel Denial of Service Vulnerability Important
Windows DirectX CVE-2025-62465 DirectX Graphics Kernel Denial of Service Vulnerability Important
Windows DirectX CVE-2025-62573 DirectX Graphics Kernel Elevation of Privilege Vulnerability Important
Windows DWM Core Library CVE-2025-64679 Windows DWM Core Library Elevation of Privilege Vulnerability Important
Windows DWM Core Library CVE-2025-64680 Windows DWM Core Library Elevation of Privilege Vulnerability Important
Windows Hyper-V CVE-2025-62567 Windows Hyper-V Denial of Service Vulnerability Important
Windows Installer CVE-2025-62571 Windows Installer Elevation of Privilege Vulnerability Important
Windows Message Queuing CVE-2025-62455 Microsoft Message Queuing (MSMQ) Elevation of Privilege Vulnerability Important
Windows PowerShell CVE-2025-54100 PowerShell Remote Code Execution Vulnerability Important
Windows Projected File System CVE-2025-62464 Windows Projected File System Elevation of Privilege Vulnerability Important
Windows Projected File System CVE-2025-55233 Windows Projected File System Elevation of Privilege Vulnerability Important
Windows Projected File System CVE-2025-62462 Windows Projected File System Elevation of Privilege Vulnerability Important
Windows Projected File System CVE-2025-62467 Windows Projected File System Elevation of Privilege Vulnerability Important
Windows Projected File System Filter Driver CVE-2025-62461 Windows Projected File System Elevation of Privilege Vulnerability Important
Windows Remote Access Connection Manager CVE-2025-62474 Windows Remote Access Connection Manager Elevation of Privilege Vulnerability Important
Windows Remote Access Connection Manager CVE-2025-62472 Windows Remote Access Connection Manager Elevation of Privilege Vulnerability Important
Windows Resilient File System (ReFS) CVE-2025-62456 Windows Resilient File System (ReFS) Remote Code Execution Vulnerability Important
Windows Routing and Remote Access Service (RRAS) CVE-2025-62549 Windows Routing and Remote Access Service (RRAS) Remote Code Execution Vulnerability Important
Windows Routing and Remote Access Service (RRAS) CVE-2025-62473 Windows Routing and Remote Access Service (RRAS) Information Disclosure Vulnerability Important
Windows Routing and Remote Access Service (RRAS) CVE-2025-64678 Windows Routing and Remote Access Service (RRAS) Remote Code Execution Vulnerability Important
Windows Shell CVE-2025-62565 Windows File Explorer Elevation of Privilege Vulnerability Important
Windows Shell CVE-2025-64661 Windows Shell Elevation of Privilege Vulnerability Important
Windows Shell CVE-2025-64658 Windows File Explorer Elevation of Privilege Vulnerability Important
Windows Storage VSP Driver CVE-2025-59517 Windows Storage VSP Driver Elevation of Privilege Vulnerability Important
Windows Storage VSP Driver CVE-2025-59516 Windows Storage VSP Driver Elevation of Privilege Vulnerability Important
Windows Win32K - GRFX CVE-2025-62458 Win32k Elevation of Privilege Vulnerability Important

Paramount Skydance Makes Hostile Takeover Bid for Warner Bros. Discovery

Daring Fireball
www.wsj.com
2025-12-09 18:37:33
The Wall Street Journal yesterday: Paramount launched a $77.9 billion hostile takeover offer for Warner Bros. Discovery Monday, taking its case for acquiring the storied entertainment company directly to shareholders just days after Warner agreed to a deal with Netflix. Paramount, run by David ...
Original Article

Fortinet warns of critical FortiCloud SSO login auth bypass flaws

Bleeping Computer
www.bleepingcomputer.com
2025-12-09 18:36:48
Fortinet has released security updates to address two critical vulnerabilities in FortiOS, FortiWeb, FortiProxy, and FortiSwitchManager that could allow attackers to bypass FortiCloud SSO authentication. [...]...
Original Article

Fortinet

Fortinet has released security updates to address two critical vulnerabilities in FortiOS, FortiWeb, FortiProxy, and FortiSwitchManager that could allow attackers to bypass FortiCloud SSO authentication.

Threat actors can exploit the two security flaws tracked as CVE-2025-59718 (FortiOS, FortiProxy, FortiSwitchManager) and CVE-2025-59719 (FortiWeb) by abusing improper verification of cryptographic signature weaknesses in vulnerable products via a maliciously crafted SAML message.

However, as Fortinet explained in an advisory published today, the vulnerable FortiCloud feature is not enabled by default when the device is not FortiCare-registered.

"Please note that the FortiCloud SSO login feature is not enabled in default factory settings," Fortinet said. "However, when an administrator registers the device to FortiCare from the device's GUI, unless the administrator disables the toggle switch 'Allow administrative login using FortiCloud SSO' in the registration page, FortiCloud SSO login is enabled upon registration."

To protect their systems against attacks exploiting these vulnerabilities, admins are advised to temporarily disable the FortiCloud login feature (if enabled) until they upgrade to a non-vulnerable version.

To disable FortiCloud login, navigate to System -> Settings and switch "Allow administrative login using FortiCloud SSO" to Off. Alternatively, you can run the following command from the command-line interface:

config system global
set admin-forticloud-sso-login disable
end

Today, the company also patched an unverified password change vulnerability ( CVE-2025-59808 ) that allows attackers "who gained access to a victim's user account to reset the account credentials without being prompted for the account's password," and another one that can let threat actors authenticate using the hash in place of the password ( CVE-2025-64471 ).

Fortinet security vulnerabilities are frequently exploited (often as zero days) in both ransomware and cyber-espionage attacks.

For instance, Fortinet disclosed in February that the Chinese Volt Typhoon hacking group backdoored a Dutch Ministry of Defence military network using custom Coathanger remote access trojan (RAT) malware after exploiting two FortiOS SSL VPN flaws (CVE-2023-27997 and CVE-2022-42475).

More recently, in August, Fortinet patched a command injection vulnerability (CVE-2025-25256) with publicly available exploit code in its FortiSIEM security monitoring solution, one day after cybersecurity company GreyNoise reported a massive spike in brute-force attacks targeting Fortinet SSL VPNs.

In November, Fortinet warned of a FortiWeb zero-day (CVE-2025-58034) that was actively exploited in attacks, one week after confirming that it had silently patched another massively exploited FortiWeb zero-day (CVE-2025-64446).

Windows 11 KB5072033 & KB5071417 cumulative updates released

Bleeping Computer
www.bleepingcomputer.com
2025-12-09 18:31:38
Microsoft has released Windows 11 KB5072033 and KB5071417 cumulative updates for versions 25H2/24H2 and 23H2 to fix security vulnerabilities, bugs, and add new features. [...]...
Original Article

Windows 11

Microsoft has released Windows 11 KB5072033 and KB5071417 cumulative updates for versions 25H2/24H2 and 23H2 to fix security vulnerabilities, bugs, and add new features.

Today's updates are mandatory as they contain the December 2025 Patch Tuesday security patches for vulnerabilities discovered in previous months.

You can install today's update by going to Start > Settings > Windows Update and clicking on 'Check for Updates .'

December 2025 Update

You can also manually download and install the update from the Microsoft Update Catalog .

This is the third 'Patch Tuesday' release for version 25H2, but as it's based on version 24H2, there are no exclusive or special changes. You'll get the same fixes across the two versions of Windows 11.

What's new in the December 2025 Patch Tuesday update

After installing today's security updates, Windows 11 25H2 ( KB5072033 ) will have its build number changed to Build 26200.7462 (or 26100.7462 in the case of 24H2), and 23H2 ( KB5071417 ) will be changed to 226x1.6050.

The biggest highlight of the update is the dark mode support for dialogs in File Explorer and new Virtual Workspaces settings for managing features like Sandbox:

  • [Advanced Settings] New! You can now turn on Virtual Workspaces in Advanced Settings . Virtual Workspaces allow you to enable or disable virtual environments such as Hyper-V and Windows Sandbox. To access Virtual Workspaces, go to Settings > System > Advanced .

  • [Desktop Spotlight] New! When Windows Spotlight is set as your desktop background ( Settings > Personalization > Background ), the context menu now includes two options: Learn more about this background and Next desktop background .

  • [File Explorer]

    • New! File Explorer now offers a more consistent dark mode experience. Key actions such as copy , move , and delete dialogs have been updated in both default and expanded views. Progress bars and chart views now align with the dark theme to improve readability. Confirmation dialogs for actions like skip , override , and file selection , along with multiple error dialogs, have also been refreshed for a cohesive look.

    • New! The simplified File Explorer context menu for easier navigation. Common actions like Share , Copy , and Move now appear in a single organized menu. This change removes clutter and makes commands easier to find, whether you use a mouse, keyboard, or assistive technology. This change will initially be available to a small group of devices as we evaluate the best approach.

  • [Gaming] New! The full-screen experience (FSE) is now available on more Windows 11 handheld devices after its initial launch on ASUS ROG Ally and ROG Ally X . FSE gives you a console-style interface with the Xbox app, making handheld gaming clean and distraction-free. It improves performance by minimizing background tasks, so gameplay stays smooth and responsive. To turn it on, go to Settings > Gaming > Full screen experience , and set Xbox as your home app. You can open FSE from Task View or Game Bar, or configure your handheld to enter full-screen experience on startup.

  • [Input] New! Pens that support haptic feedback—small vibrations that simulate touch—now provide tactile responses when you interact with the Windows interface. For example, you might feel vibrations when hovering over the close button or when snapping and resizing windows .

  • [Keyboard] New! Keyboard backlight performance has improved on supported HID-compliant keyboards. Compatible keyboards display keys clearly in low-light environments, and the backlight adjusts to help conserve power.

  • [Mobile Device Settings] New! You can add and manage your mobile devices in Settings under Bluetooth & Devices > Mobile Devices . On this page you can manage features such as using your device as a connected camera or accessing your device’s files in File Explorer.

  • [OneDrive] New! The new OneDrive icon now appears in Settings > Accounts > Homepage .

  • [Recovery] New! Quick Machine Recovery (QMR) now runs a one-time scan on PCs where the quick machine recovery and automatically check for solutions settings are both turned on. If a fix isn’t available immediately, QMR directs you to the best recovery options to get your PC running again.

  • [Settings]

    • New! Keyboard settings for "character repeat delay and rate", and "cursor blink rate", have moved from Control Panel to Settings. You can now find "character repeat delay and rate" under Settings > Accessibility > Keyboard , and "cursor blink rate" under Settings > Accessibility > Text cursor .

    • New! Settings now has an updated layout that organizes device details and related options in one place. You can quickly access features such as Storage settings for faster navigation.

  • [Taskbar & System Tray]

    • New! Animations for app groups on the taskbar have been updated. When you hover over app groups on the taskbar, you can see the preview transition as you slide between apps.

    • New! You can now share an open app window with Copilot directly from the taskbar. When you hover over an open app, you’ll see the option to share its window with Copilot—similar to sharing in Microsoft Teams . Hover over the app icon, select Share with Copilot to start a conversation. Copilot Vision will analyze the content you shared and provide insights based on what’s displayed in that app.

    • Fixed: The Automatically hide the taskbar setting might unexpectedly turn off, after seeing a message saying a toolbar is already hidden on this side of your screen .

  • [Widgets]

    • New! You can choose a default dashboard in Widget Board settings. When live weather is showing, the Widget Board opens the first dashboard in your navigation bar instead of the most recently used one, making the experience consistent. To set your default dashboard, open the full-page Widgets Settings by selecting Settings in the navigation bar , then move your preferred dashboard to the top.

    • New! Dashboard icons in the Widget navigation bar now show numbers that correspond to the number of alerts from that dashboard. Navigation bar badges clear automatically when you leave a dashboard, making it easy to track what’s new.

  • [Windows Share]

    • New! Drag tray now supports multi-file sharing, shows more relevant apps, and makes it easy to move files to a chosen folder.

    • New! You can now turn drag tray on or off from Settings > System > Nearby sharing .

    • New! You can share OneDrive files through other apps. The options appear under Share using when you select Copy link . You must be signed into your Microsoft account.

  • [Display and Graphics]

    • Improved: Performance has been improved when apps query monitors for their full list of supported modes. Previously, this could lead to a momentary stutter on very high-resolution monitors. This work should help prevent and reduce stuttering in these scenarios.

  • [Game Pass] References to Game Pass in Settings are now modified to reflect updated branding and benefits.

  • [Start menu] Improved: For users with the new Start menu , the Windows Search panel now matches the new Start menu in size. This update aims to create a smoother transition when searching.

All these features are rolling out gradually, so it'll be a while before they show up on your PC.

At the moment, Microsoft is not aware of new issues with this month's Patch Tuesday.

Microsoft also confirmed that it will not be releasing optional updates in December, as most of the engineers will be away for the holidays. Updates will resume in January.

Congress Quietly Kills Military “Right to Repair,” Allowing Corporations to Cash In on Fixing Broken Products

Intercept
theintercept.com
2025-12-09 18:22:54
Both chambers included Pentagon budget provisions for a right to repair, but they died after defense industry meetings on Capitol Hill. The post Congress Quietly Kills Military “Right to Repair,” Allowing Corporations to Cash In on Fixing Broken Products appeared first on The Intercept....
Original Article

The idea of a “right to repair” — a requirement that companies facilitate consumers’ repairs, maintenance, and modification of products — is extremely popular, even winning broad, bipartisan support in Congress. That could not, however, save it from the military–industrial complex.

Lobbyists succeeded in killing part of the National Defense Authorization Act that would have given service members the right to fix their equipment in the field without having to worry about military suppliers’ intellectual property.

“Defense contractors have a lot of influence on Capitol Hill.”

The decision to kill the popular proposal was made public Sunday after a closed-door conference of top congressional officials, including defense committee chairs, along with Speaker Mike Johnson, R-La., and Senate Majority Leader John Thune, R-S.D.

Those meetings were secret, but consumer advocates say they have a pretty good idea of what happened.

“It’s pretty clear that defense contractors opposed the right-to-repair provisions, and they pressed hard to have them stripped out of the final bill,” said Isaac Bowers, the federal legislative director at U.S. PIRG. “All we can say is that defense contractors have a lot of influence on Capitol Hill.”

The idea had drawn bipartisan support in both the House and Senate, which each passed their own versions of the proposal.

Under one version, co-sponsored by Sen. Elizabeth Warren, D-Mass., and Sen. Tim Sheehy, R-Mt., defense companies would have been required to supply the information needed for repairs — such as technical data, maintenance manuals, engineering drawings, and lists of replacement parts — as a condition of Pentagon contracts.

The idea was that no service member would ever be left waiting on a contractor to fly in from Norway to repair a simple part — which once happened — or, in another real-life scenario , told by the manufacturer to buy a new CT scanner in a combat zone because one malfunctioned.

Instead of worrying about voiding a warranty, military personnel in the field could use a 3D printer or elbow grease to fix a part.

“The military is a can-do operation,” Bowers said. “Service members can and should be able to repair their own equipment, and this will save costs if they can do it upfront and on time and on their schedule.”

“Contractor Profiteering”

Operations and maintenance costs are typically the biggest chunk of the Pentagon’s budget, at 40 percent. That is in large part because the military often designs new weapons at the same time it builds them, according to Julia Gledhill, a research analyst for the national security reform program at the Stimson Center.

“We do see concurrent development, wherein the military is designing and building a system at the same time,” Gledhill said on a webinar hosted by the nonprofit Taxpayers for Common Sense on Tuesday. “That, turns out, doesn’t work very well. It means that you do discover design flaws, what the DOD would characterize as defects, and then you spend a whole lot of money trying to fix them.”

For the defense industry, however, the proposal threatened a key profit stream. Once companies sell hardware and software to the Pentagon, they can keep making money by forcing the government to hire them for repairs.

Defense lobbyists pushed back hard against the proposal when it arose in the military budgeting process. The CEO of the Aerospace Industries Association claimed that the legislation could “cripple the very innovation on which our warfighters rely.”

The contractors’ argument was that inventors would not sell their products to the Pentagon if they knew they had to hand over their trade secrets as well.

In response, Warren wrote an unusual letter last month calling out one trade group, the National Defense Industrial Association.

“NDIA’s opposition to these commonsense reforms is a dangerous and misguided attempt,” Warren said , “to protect an unacceptable status quo of giant contractor profiteering that is expensive for taxpayers and presents a risk to military readiness and national security.”

As a piece of legislation, the right to repair has likely died until next year’s defense budget bill process. The notion could be imposed in the form of internal Pentagon policies, but it would be less of a mandate: Such policies can be more easily waived.

The secretaries of the Army, Navy, and Air Force have all expressed some degree of support for the idea, and Defense Secretary Pete Hegseth has urged the branches to include “right to repair” provisions in new contracts going forward — though, for now, it’s just a suggestion rather than a legal requirement.

The stack circuitry of the Intel 8087 floating point chip, reverse-engineered

Hacker News
www.righto.com
2025-12-09 18:16:44
Comments...
Original Article

Early microprocessors were very slow when operating with floating-point numbers. But in 1980, Intel introduced the 8087 floating-point coprocessor, performing floating-point operations up to 100 times faster. This was a huge benefit for IBM PC applications such as AutoCAD, spreadsheets, and flight simulators. The 8087 was so effective that today's computers still use a floating-point system based on the 8087. 1

The 8087 was an extremely complex chip for its time, containing somewhere between 40,000 and 75,000 transistors, depending on the source. 2 To explore how the 8087 works, I opened up a chip and took numerous photos of the silicon die with a microscope. Around the edges of the die, you can see the hair-thin bond wires that connect the chip to its 40 external pins. The complex patterns on the die are formed by its metal wiring, as well as the polysilicon and silicon underneath. The bottom half of the chip is the "datapath", the circuitry that performs calculations on 80-bit floating point values. At the left of the datapath, a constant ROM holds important constants such as π. At the right are the eight registers that form the stack, along with the stack control circuitry.

Die of the Intel 8087 floating point unit chip, with main functional blocks labeled. The die is 5mm×6mm. Click for a larger image.

The chip's instructions are defined by the large microcode ROM in the middle. This ROM is very unusual; it is semi-analog, storing two bits per transistor by using four transistor sizes. To execute a floating-point instruction, the 8087 decodes the instruction and the microcode engine starts executing the appropriate micro-instructions from the microcode ROM. The decode circuitry to the right of the ROM generates the appropriate control signals from each micro-instruction. The bus registers and control circuitry handle interactions with the main 8086 processor and the rest of the system. Finally, the bias generator uses a charge pump to create a negative voltage to bias the chip's substrate, the underlying silicon.

The stack registers and control circuitry (in red above) are the subject of this blog post. Unlike most processors, the 8087 organizes its registers in a stack, with instructions operating on the top of the stack. For instance, the square root instruction replaces the value on the top of the stack with its square root. You can also access a register relative to the top of the stack, for instance, adding the top value to the value two positions down from the top. The stack-based architecture was intended to improve the instruction set, simplify compiler design, and make function calls more efficient, although it didn't work as well as hoped.

The stack on the 8087. From The 8087 Primer, page 60.

The diagram above shows how the stack operates. The stack consists of eight registers, with the Stack Top (ST) indicating the current top of the stack. To push a floating-point value onto the stack, the Stack Top is decremented and then the value is stored in the new top register. A pop is performed by copying the value from the stack top and then incrementing the Stack Top. In comparison, most processors specify registers directly, so register 2 is always the same register.
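
To make the wraparound arithmetic concrete, here is a minimal C sketch of an eight-register stack addressed by a 3-bit Stack Top pointer. It models only the addressing behavior described above; the function names and the use of double as a stand-in for the 80-bit format are illustrative, not taken from the 8087.

```c
#include <stdint.h>
#include <stdio.h>

/* Eight stack registers (80 bits each in the real chip, doubles here),
 * addressed by a 3-bit Stack Top (ST) pointer that wraps modulo 8. */
typedef struct {
    double  reg[8];
    uint8_t top;      /* 3-bit Stack Top pointer, 0..7 */
} fpu_stack;

/* Push: decrement ST (mod 8), then store the value in the new top register. */
static void st_push(fpu_stack *s, double v) {
    s->top = (s->top - 1) & 7;
    s->reg[s->top] = v;
}

/* Pop: read the value at ST, then increment ST (mod 8). */
static double st_pop(fpu_stack *s) {
    double v = s->reg[s->top];
    s->top = (s->top + 1) & 7;
    return v;
}

/* ST(i): access the register i positions down from the top of the stack. */
static double st_get(const fpu_stack *s, unsigned i) {
    return s->reg[(s->top + i) & 7];
}

int main(void) {
    fpu_stack s = { .top = 0 };
    st_push(&s, 1.0);
    st_push(&s, 2.0);
    printf("ST(0)=%g ST(1)=%g\n", st_get(&s, 0), st_get(&s, 1)); /* 2 1 */
    printf("pop=%g\n", st_pop(&s));                              /* 2 */
    return 0;
}
```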

The registers

The stack registers occupy a substantial area on the die of the 8087 because floating-point numbers take many bits. A floating-point number consists of a fractional part (sometimes called the mantissa or significand), along with the exponent part; the exponent allows floating-point numbers to cover a range from extremely small to extremely large. In the 8087, floating-point numbers are 80 bits: 64 bits of significand, 15 bits of exponent, and a sign bit. An 80-bit register was very large in the era of 8-bit or 16-bit computers; the eight registers in the 8087 would be equivalent to 40 registers in the 8086 processor.

The registers in the 8087 form an 8×80 grid of cells. The close-up shows an 8×8 block. I removed the metal layer with acid to reveal the underlying silicon circuitry.

The registers store each bit in a static RAM cell. Each cell has two inverters connected in a loop. This circuit forms a stable feedback loop, with one inverter on and one inverter off. Depending on which inverter is on, the circuit stores a 0 or a 1. To write a new value into the circuit, one of the lines is pulled low, flipping the loop into the desired state. The trick is that each inverter uses a very weak transistor to pull the output high, so its output is easily overpowered to change the state.

Two inverters in a loop can store a 0 or a 1.

These inverter pairs are arranged in an 8 × 80 grid that implements eight words of 80 bits. Each of the 80 rows has two bitlines that provide access to a bit. The bitlines provide both read and write access to a bit; the pair of bitlines allows either inverter to be pulled low to store the desired bit value. Eight vertical wordlines enable access to one word, one column of 80 bits. Each wordline turns on 160 pass transistors, connecting the bitlines to the inverters in the selected column. Thus, when a wordline is enabled, the bitlines can be used to read or write that word.

Although the chip looks two-dimensional, it actually consists of multiple layers. The bottom layer is silicon. The pinkish regions below are where the silicon has been "doped" to change its electrical properties, making it an active part of the circuit. The doped silicon forms a grid of horizontal and vertical wiring, with larger doped regions in the middle. On top of the silicon, polysilicon wiring provides two functions. First, it provides a layer of wiring to connect the circuit. But more importantly, when polysilicon crosses doped silicon, it forms a transistor. The polysilicon provides the gate, turning the transistor on and off. In this photo, the polysilicon is barely visible, so I've highlighted part of it in red. Finally, horizontal metal wires provide a third layer of interconnecting wiring. Normally, the metal hides the underlying circuitry, so I removed the metal with acid for this photo. I've drawn blue lines to represent the metal layer. Contacts provide connections between the various layers.

A close-up of a storage cell in the registers. The metal layer and most of the polysilicon have been removed to show the underlying silicon.

The layers combine to form the inverters and selection transistors of a memory cell, indicated with the dotted line below. There are six transistors (yellow), where polysilicon crosses doped silicon. Each inverter has a transistor that pulls the output low and a weak transistor to pull the output high. When the word line (vertical polysilicon) is active, it connects the selected inverters to the bit lines (horizontal metal) through the two selection transistors. This allows the bit to be read or written.

The function of the circuitry in a storage cell.

Each register has two tag bits associated with it, an unusual form of metadata to indicate if the register is empty, contains zero, contains a valid value, or contains a special value such as infinity. The tag bits are used to optimize performance internally and are mostly irrelevant to the programmer. As well as being accessed with a register, the tag bits can be accessed in parallel as a 16-bit "Tag Word". This allows the tags to be saved or loaded as part of the 8087's state, for instance, during interrupt handling.
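
As a rough sketch of how eight two-bit tags line up with a 16-bit Tag Word, the C fragment below packs and unpacks them. The tag encodings shown are the conventional x87 values (00 valid, 01 zero, 10 special, 11 empty); the helper names are mine, not the chip's.

```c
#include <stdint.h>

/* Conventional x87 tag values: 00 = valid, 01 = zero,
 * 10 = special (NaN, infinity, denormal), 11 = empty. */
enum { TAG_VALID = 0, TAG_ZERO = 1, TAG_SPECIAL = 2, TAG_EMPTY = 3 };

/* Pack eight 2-bit tags (one per physical register) into a 16-bit Tag Word,
 * with register 0 in the low-order bits. */
static uint16_t pack_tag_word(const uint8_t tag[8]) {
    uint16_t word = 0;
    for (int r = 0; r < 8; r++)
        word |= (uint16_t)(tag[r] & 3) << (2 * r);
    return word;
}

/* Extract the 2-bit tag for physical register r from a Tag Word. */
static uint8_t tag_of(uint16_t word, int r) {
    return (word >> (2 * r)) & 3;
}
```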

The decoder

The decoder circuit, wedged into the middle of the register file, selects one of the registers. A register is specified internally with a 3-bit value. The decoder circuit energizes one of the eight register select lines based on this value.

The decoder circuitry is straightforward: it has eight 3-input NOR gates to match one of the eight bit patterns. The select line is then powered through a high-current driver that uses large transistors. (In the photo below, you can compare the large serpentine driver transistors to the small transistors in a bit cell.)
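
Functionally, this is a 3-to-8 one-hot decoder. The C sketch below is a behavioral model of the NOR-gate matching just described: one standard arrangement (not necessarily the 8087's exact wiring) feeds each gate the true or complemented address bits so that all three inputs are low only for that gate's pattern.

```c
#include <stdint.h>

/* 3-input NOR: output is 1 only when all inputs are 0. */
static int nor3(int a, int b, int c) {
    return !(a | b | c);
}

/* One-hot 3-to-8 decoder: select line k goes high only when the 3-bit
 * register number equals k.  For gate k, a bit expected to be 1 is fed
 * through its complement, so every input is 0 exactly on a match. */
static uint8_t decode3to8(uint8_t v) {
    int b0 = v & 1, b1 = (v >> 1) & 1, b2 = (v >> 2) & 1;
    uint8_t select = 0;
    for (int k = 0; k < 8; k++) {
        int i0 = (k & 1) ? !b0 : b0;
        int i1 = (k & 2) ? !b1 : b1;
        int i2 = (k & 4) ? !b2 : b2;
        if (nor3(i0, i1, i2))
            select |= (uint8_t)(1 << k);   /* drive select line k */
    }
    return select;   /* exactly one bit set */
}
```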

The decoder circuitry has eight similar blocks to drive the eight select lines.

The decoder has an interesting electrical optimization. As shown earlier, the register select lines are eight polysilicon lines running vertically, the length of the register file. Unfortunately, polysilicon has fairly high resistance, better than silicon but much worse than metal. The problem is that the resistance of a long polysilicon line will slow down the system. That is, the capacitance of transistor gates in combination with high resistance causes an RC (resistive-capacitive) delay in the signal.

The solution is that the register select lines also run in the metal layer, a second set of lines immediately to the right of the register file. These lines branch off from the register file about 1/3 of the way down, run to the bottom, and then connect back to the polysilicon select lines at the bottom. This reduces the maximum resistance through a select line, increasing the speed.

A diagram showing how 8 metal lines run parallel to the main select lines. The register file is much taller than shown; the middle has been removed to make the diagram fit.

The stack control circuitry

A stack needs more control circuitry than a regular register file, since the circuitry must keep track of the position of the top of the stack. 3 The control circuitry increments and decrements the top of stack (TOS) pointer as values are pushed or popped (purple). 4 Moreover, an 8087 instruction can access a register based on its offset, for instance the third register from the top. To support this, the control circuitry can temporarily add an offset to the top of stack position (green). A multiplexer (red) selects either the top of stack or the adder output, and feeds it to the decoder (blue), which selects one of the eight stack registers in the register file (yellow), as described earlier.

The register stack in the 8087. Adapted from Patent USRE33629E. I don't know what the GRX field is. I also don't know why this shows a subtractor and not an adder.

The physical implementation of the stack circuitry is shown below. The logic at the top selects the stack operation based on the 16-bit micro-instruction. 5 Below that are the three latches that hold the top of stack value. (The large white squares look important, but they are simply "jumpers" from the ground line to the circuitry, passing under metal wires.)

The stack control circuitry. The blue regions on the right are oxide residue that remained when I dissolved the metal rail for the 5V power.

The three-bit adder is at the bottom, along with the multiplexer. You might expect the adder to use a simple "full adder" circuit. Instead, it is a faster carry-lookahead adder. I won't go into details here, but the summary is that at each bit position, an AND gate produces a Carry Generate signal while an XOR gate produces a Carry Propagate signal. Logic gates combine these signals to produce the output bits in parallel, avoiding the slowdown of the carry rippling through the bits.
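
Here is the generate/propagate scheme written out in C for a 3-bit adder. The carry out is simply discarded, since the register number wraps modulo 8 anyway; this is a behavioral sketch of the idea, not the 8087's exact gate arrangement.

```c
#include <stdint.h>

/* 3-bit carry-lookahead addition: each bit position produces a
 * generate (g = a AND b) and propagate (p = a XOR b) signal, and the
 * carries are then computed in parallel rather than rippling. */
static uint8_t cla_add3(uint8_t a, uint8_t b, int carry_in) {
    int g0 = (a & 1) & (b & 1),               p0 = (a & 1) ^ (b & 1);
    int g1 = ((a >> 1) & 1) & ((b >> 1) & 1), p1 = ((a >> 1) & 1) ^ ((b >> 1) & 1);
    int p2 = ((a >> 2) & 1) ^ ((b >> 2) & 1);

    /* Lookahead carries, each a direct function of the inputs. */
    int c1 = g0 | (p0 & carry_in);
    int c2 = g1 | (p1 & g0) | (p1 & p0 & carry_in);

    int s0 = p0 ^ carry_in;
    int s1 = p1 ^ c1;
    int s2 = p2 ^ c2;
    return (uint8_t)((s0 | (s1 << 1) | (s2 << 2)) & 7);   /* result mod 8 */
}
```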

The incrementer/decrementer uses a completely different approach. Each of the three bits uses a toggle flip-flop. A few logic gates determine if each bit should be toggled or should keep its previous value. For instance, when incrementing, the top bit is toggled if the lower bits are 11 (e.g. incrementing from 011 to 100). For decrementing, the top bit is toggled if the lower bits are 00 (e.g. 100 to 011). Simpler logic determines if the middle bit should be toggled. The bottom bit is easier, toggling every time whether incrementing or decrementing.
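
The same toggle conditions can be written out directly. The C sketch below is a behavioral model of the scheme just described, with illustrative names; it keeps or flips each bit based only on the lower bits.

```c
#include <stdint.h>

/* Toggle-flip-flop style 3-bit increment/decrement (wraps mod 8).
 * The low bit toggles every time; when incrementing, a higher bit
 * toggles if all lower bits are 1; when decrementing, if all are 0. */
static uint8_t step3(uint8_t v, int increment) {
    int b0 = v & 1, b1 = (v >> 1) & 1, b2 = (v >> 2) & 1;

    int t0 = 1;                                           /* always toggles */
    int t1 = increment ?  b0        : !b0;
    int t2 = increment ? (b0 && b1) : (!b0 && !b1);

    return (uint8_t)((b0 ^ t0) | ((b1 ^ t1) << 1) | ((b2 ^ t2) << 2));
}
```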

The schematic below shows the circuitry for one bit of the stack. Each bit is implemented with a moderately complicated flip-flop that can be cleared, loaded with a value, or toggled, based on control signals from the microcode. The flip-flop is constructed from two set-reset (SR) latches. Note that the flip-flop outputs are crossed when fed back to the input, providing the inversion for the toggle action. At the right, the multiplexer selects either the register value or the sum from the adder (not shown), generating the signals to the decoder.

Schematic of one bit of the stack.

Drawbacks of the stack approach

According to the designers of the 8087, 7 the main motivation for using a stack rather than a flat register set was that instructions didn't have enough bits to address multiple register operands. In addition, a stack has "advantages over general registers for expression parsing and nested function calls." That is, a stack works well for a mathematical expression since sub-expressions can be evaluated on the top of the stack. And for function calls, you avoid the cost of saving registers to memory, since the subroutine can use the stack without disturbing the values underneath. At least that was the idea.

The main problem is "stack overflow". The 8087's stack has eight entries, so if you push a ninth value onto the stack, the stack will overflow. Specifically, the top-of-stack pointer will wrap around, obliterating the bottom value on the stack. The 8087 is designed to detect a stack overflow using the register tags: pushing a value to a non-empty register triggers an invalid operation exception. 6

The designers expected that stack overflow would be rare and could be handled by the operating system (or library code). After detecting a stack overflow, the software should dump the existing stack to memory to provide the illusion of an infinite stack. Unfortunately, bad design decisions made it difficult "both technically and commercially" to handle stack overflow.

One of the 8087's designers (Kahan) attributes the 8087's stack problems to the time difference between California, where the designers lived, and Israel, where the 8087 was implemented. Due to a lack of communication, each team thought the other was implementing the overflow software. It wasn't until the 8087 was in production that they realized that "it might not be possible to handle 8087 stack underflow/overflow in a reasonable way. It's not impossible, just impossible to do it in a reasonable way."

As a result, the stack was largely a problem rather than a solution. Most 8087 software saved the full stack to memory before performing a function call, creating more memory traffic. Moreover, compilers turned out to work better with regular registers than a stack, so compiler writers awkwardly used the stack to emulate regular registers. The GCC compiler reportedly needs 3000 lines of extra code to support the x87 stack.

In the 1990s, Intel introduced a new floating-point system called SSE , followed by AVX in 2011. These systems use regular (non-stack) registers and provide parallel operations for higher performance, making the 8087's stack instructions largely obsolete.

The success of the 8087

At the start, Intel was unenthusiastic about producing the 8087, viewing it as unlikely to be a success. John Palmer, a principal architect of the chip, had little success convincing skeptical Intel management that the market for the 8087 was enormous. Eventually, he said, "I'll tell you what. I'll relinquish my salary, provided you'll write down your number of how many you expect to sell, then give me a dollar for every one you sell beyond that." 7 Intel didn't agree to the deal—which would have made a fortune for Palmer—but they reluctantly agreed to produce the chip.

Intel's Santa Clara engineers shunned the 8087, considering it unlikely to work: the 8087 would be two to three times more complex than the 8086, with a die so large that a wafer might not have a single working die. Instead, Rafi Nave, at Intel's Israel site, took on the risky project: “Listen, everybody knows it's not going to work, so if it won't work, I would just fulfill their expectations or their assessment. If, by chance, it works, okay, then we'll gain tremendous respect and tremendous breakthrough on our abilities.”

A small team of seven engineers developed the 8087 in Israel. They designed the chip on Mylar sheets: a millimeter on Mylar represented a micron on the physical chip. The drawings were then digitized on a Calma system by clicking on each polygon to create the layout. When the chip was moved into production, the yield was very low but better than feared: two working dies per four-inch wafer.

The 8087 ended up being a large success, said to have been Intel's most profitable product line at times. The success of the 8087 (along with the 8088) cemented the reputation of Intel Israel, which eventually became Israel's largest tech employer. The benefits of floating-point hardware proved to be so great that Intel integrated the floating-point unit into later processors starting with the 80486 (1989). Nowadays, most modern computers, from cellphones to mainframes, provide floating point based on the 8087, so I consider the 8087 one of the most influential chips ever created.

For more, follow me on Bluesky ( @righto.com ), Mastodon ( @ [email protected] ), or RSS . I wrote some articles about the 8087 a few years ago, including the die , the ROM , the bit shifter , and the constants , so you may have seen some of this material before.

Notes and references

"The Matilda Effect": Pioneering Women Scientists Written Out of Science History

Hacker News
www.openculture.com
2025-12-09 17:57:55
Comments...
Original Article

Photo via Wikimedia Commons

The history of science, like most every history we learn, comes to us as a procession of great, almost exclusively white, men, unbroken but for the occasional token woman—well-deserving of her honors but seemingly anomalous nonetheless. “If you believe the history books,” notes the Timeline series The Matilda Effect, “science is a guy thing. Discoveries are made by men, which spur further innovation by men, followed by acclaim and prizes for men. But too often, there is an unsung woman genius who deserves just as much credit” and who has been overshadowed by male colleagues who grabbed the glory.

In 1993, Cornell University historian of science Margaret Rossiter dubbed the denial of recognition to women scientists “the Matilda effect,” for suffragist and abolitionist Matilda Joslyn Gage, whose 1893 essay “Woman as an Inventor” protested the common assertion that “woman… possesses no inventive or mechanical genius.” Such assertions, Gage proceeded to demonstrate, “are carelessly or ignorantly made… although woman’s scientific education has been grossly neglected, yet some of the most important inventions of the world are due to her.”

Over 100 years later, Rossiter’s tenacious work in unearthing the contributions of U.S. women scientists inspired the History of Science Society to name a prestigious prize after her. The Timeline series profiles a few of the women whom it describes as prime examples of the Matilda effect, including Dr. Lise Meitner, the Austrian-born physicist and pioneer of nuclear technology who escaped the Nazis and became known in her time as “the Jewish Mother of the Bomb,” though she had nothing to do with the atomic bomb. Instead, “Meitner led the research that ultimately discovered nuclear fission.” But Meitner would become “little more than a footnote in the history of Nazi scientists and the birth of the Atomic age.”

Instead, Meitner’s colleague Otto Hahn received the accolades, a Nobel Prize in Chemistry and “renown as the discoverer of nuclear fission. Meitner, who directed Hahn’s most significant experiments and calculated the energy release resulting from fission, received a few essentialist headlines followed by decades of obscurity.” (See Meitner and Hahn in the photo above.) Likewise, the name of Alice Augusta Ball has been “all but scrubbed from the history of medicine,” though it was Ball, an African American chemist from Seattle, Washington, who pioneered what became known as the Dean Method, a revolutionary treatment for leprosy.

Ball conducted her research at the University of Hawaii, but she tragically died at the age of 24, in what was likely a lab accident, before the results could be published. Instead, University President Dr. Arthur Dean, who had co-taught chemistry classes with Ball, continued her work. But he failed “to mention Ball’s key contribution” despite protestations from Dr. Harry Hollmann, a surgeon who worked with Ball on treating leprosy patients. Dean claimed credit and published their work under his name. Decades later, “the scant archival trail of Alice Ball was rediscovered…. In 2000, a plaque was installed at the University of Hawaii commemorating Ball’s accomplishments.”

Other women in the Matilda effect series include bacterial geneticist Esther Lederberg, who made amazing discoveries in genetics that won her husband a Nobel Prize; Irish astrophysicist Jocelyn Bell Burnell, who discovered the first radio pulsars in 1967, but was excluded from the Nobel awarded to her thesis supervisor Antony Hewish and astronomer Martin Ryle. A similar fate befell Dr. Rosalind Franklin, the chemist excluded from the Nobel awarded to her colleagues James Watson, Francis Crick, and Maurice Wilkins for the discovery of DNA.

These prominent examples are but the tip of the iceberg when it comes to women who made significant contributions to scientific history and were rewarded by being written out of it and denied awards and recognition in their lifetime. For more on the history of U.S. women in science and the social forces that worked to exclude them, see Margaret Rossiter’s three-volume Women Scientists in America series: Struggles and Strategies to 1940, Before Affirmative Action, 1940–1972, and Forging a New World since 1972. And read Timeline’s Matilda Effect series of articles here.

Note: An earlier version of this post appeared on our site in 2018.

Related Content:

Read the “Don’t Let the Bastards Get You Down” Letter That Albert Einstein Sent to Marie Curie During a Time of Personal Crisis (1911)

Women Scientists Launch a Database Featuring the Work of 9,000 Women Working in the Sciences

Marie Curie Attended a Secret, Underground “Flying University” When Women Were Banned from Polish Universities

The Encyclopedia of Women Philosophers: A New Web Site Presents the Contributions of Women Philosophers, from Ancient to Modern

Meet the Physicist Who Has Created 1600+ Wikipedia Entries for Important Female & Minority Scientists

Josh Jones is a writer and musician based in Durham, NC.


You Don't Need an ORM

Lobsters
www.youtube.com
2025-12-09 17:38:51
Comments...

Show HN: Detail, a Bug Finder

Hacker News
detail.dev
2025-12-09 17:35:35
Comments...
Original Article

Let us show you.

Deep scans of your codebase that find bugs you’ll be glad to know about. Security vulnerabilities, data loss, billing mistakes.

Daniel Worku

CTO @ StarSling

We have Graphite, Codex, Cubic, Gemini and Cursor Bugbot running on PRs and the results from our Detail scan still gave me a minor heart attack.

See it on Open Source

How It Works

Add your repo.

Detail will clone it to a secure sandbox and confirm we can build your code properly.

Detail exercises your code.

Detail runs thousands of checks and diagnostics, looking for bugs and selecting the most significant ones.

Receive bugs.

Detail sends you a writeup for each bug we find. Delivered to Linear, Jira, GitHub Issues, or email.

Enterprise Friendly

SOC 2 Type II Certified

Highest standards of compliance with industry best practices. Audit report available upon request.

Zero Data Retention

All code and usage data can be purged based on your requirements. Our model providers retain nothing.

How Private Equity Is Changing Housing

Hacker News
www.theatlantic.com
2025-12-09 17:26:33
Comments...
Original Article

In some communities, corporations control more than 20 percent of properties.

Row of townhouses
Nathan Howard / Bloomberg / Getty

We have a housing crisis, as you probably, painfully, know. Wouldn’t you like to have someone to blame for it?

The United States is short 4 million housing units, with a particular dearth of starter homes, moderately priced apartments in low-rises, and family-friendly dwellings. Interest rates are high, which has stifled construction and pushed up the cost of mortgages. As a result, more Americans are renting, and roughly half of those households are spending more than a third of their income on shelter.

This crisis has many causes: restrictive zoning codes, arcane permitting processes, excessive community input , declining construction productivity, expensive labor, and expensive lumber. And, some say, the aggressive entry of private equity into the housing market. Institutional investors have bought up hundreds of thousands of American homes since the start of the coronavirus pandemic, outbidding families and pushing up rents —a trend lamented by everyone from Alexandria Ocasio-Cortez to J. D. Vance .

Casting private equity as a central villain in the country’s real-estate tragedy makes intuitive sense. Who’s going to win in a bidding war for a three-bedroom in a suburb of Cincinnati: a single-income family with a scrabbled-together 10 percent down payment or a Wall Street LLC offering cash? Still, housing economists and policy analysts have argued that institutional investors have played at most a bit part. Supply constraints began cropping up on the coasts a generation ago, if not earlier, whereas Wall Street started buying up significant numbers of homes only after the Great Recession and especially after the pandemic. Moreover, even if big investors are purchasing thousands of homes, they don’t own significant numbers of homes compared with small-scale landlords and individuals.

Yet in some markets, the balance has shifted. Last month, the Lincoln Institute of Land Policy and the Center for Geospatial Solutions published a report showing that corporations now own a remarkable one in 11 residential real-estate parcels in the 500 urban counties with data robust enough to analyze. In some communities, they control more than 20 percent of properties.

I figured that big investors might be picking up vacation rentals in Colorado and expensive apartment buildings in the Bay Area and the Acela Corridor. They are, the report’s authors told me. But these investors are pouring the most money into “buy low, rent high” neighborhoods: communities, many of them in the South and the Rust Belt, where large shares of families can’t afford a mortgage.

“They’re pulling all the starter homes off of the market in low-income, high-minority-density neighborhoods,” George McCarthy, the president of the Lincoln Institute, told me—a trend that is intensifying the country’s yawning racial wealth and homeownership gaps. In Cleveland, corporations own 17.5 percent of residential real-estate parcels. In the city’s East Side, which contains many predominantly Black neighborhoods , just one in five homebuyers in 2021 took out a mortgage. The rest—many investors, presumably—paid in cash or took out a loan from a non-traditional financier .

In Baltimore’s majority-Black McElderry Park and Ellwood Park/Monument neighborhoods, owner-occupants made just 13 percent of purchases in 2022. In a majority-white neighborhood not far away, owner-occupants bought more than 80 percent of homes that same year, and out-of-state corporations owned less than 1 percent of residential parcels.

The report made me see the country’s real-estate crisis in a different light. Private-equity firms and other deep-pocketed investors aren’t why Seattle and Boston are unaffordable. Those cities have had shortage-driven housing crises that have intensified over decades. The firms aren’t why many towns in the Mountain West have seen jumps in home values and a corresponding increase in homelessness, displacement, and eviction. In those communities, white-collar emigrants from big cities have arrived and outbid locals. But investor money is distorting the housing market in communities with low wages and decent-enough housing supply, pushing thousands of Black and Latino families off the property ladder. Tens of thousands of workers who would like to invest in a home are instead stuck paying rent, and putting up with the associated uncertainty.

While not all corporate landlords are bad landlords, some are bad landlords. Corporations are more likely to threaten to evict and to actually evict their tenants . They are also prone to skimping on maintenance and upkeep. “At the neighborhood level, when more than half of the properties are owned by outside investors—when you’ve now flipped that neighborhood from being primarily homeowner driven to investor driven—that matters, because homeowners behave very differently, politically and otherwise,” McCarthy said. An out-of-state investment firm might be less likely than a longtime resident or a local property manager to plant shade trees and demand safe sidewalks, for instance.

In response to the rising corporate ownership of homes, a variety of politicians have pushed for policy fixes. In New York, Governor Kathy Hochul has proposed legislation barring firms from bidding on single-family or two-family homes for the first 75 days they are on the market. Washington State is contemplating capping the number of units that corporations can own . Other legislators have suggested revoking tax benefits from large-scale owners.

McCarthy said that caps probably would not work well: Corporations might simply set up multiple entities to get around the rules and keep purchasing properties, for instance. “It’s just not going to fly,” he said. But he supports treating firms that own more than 10 properties in a given jurisdiction as commercial owners rather than residential owners, subjecting them to higher property-tax rates and higher taxes on their capital gains.

If nothing is done, what’s happening to majority-Black communities in Ohio and Virginia and Georgia and Michigan might start happening in communities around the country. Private equity might not be causing the housing crisis, but corporate owners could end up making it a lot worse for everyone.

If you're going to vibe code, why not do it in C?

Hacker News
stephenramsay.net
2025-12-09 17:11:09
Comments...
Original Article

Stephen Ramsay

Or hell, why not do it in x86 assembly?

Let’s get a few things out of the way before I go any further with this seemingly impertinent thought, because it’s nowhere near as snarky as it sounds.

First, I don’t particularly like vibe coding. I love programming, and I have loved it since I made my first tentative steps with it sometime back in the mid-to-late 90s. I love programming so much, it always feels like I’m having too much fun for it to count as real work. I’ve done it professionally, but I also do it as a hobby. Someone apparently once said, “Do what you love and you’ll never work a day in your life.” That’s how I feel about writing code. I’ve also been teaching the subject for twenty-five years, and I can honestly say I am as excited about the first day of the semester now as I was when I first started. I realize it’s a bit precious to say so, but I’ll say it anyway: Turning non-programmers into programmers is my life’s work. It is the thing of which I am most proud as a college professor.

Vibe coding makes me feel dirty in ways that I struggle to articulate precisely. It’s not just that it feels like “cheating” (though it does). I also think it takes a lot of the fun out of the whole thing. I sometimes tell people (like the aforementioned students) that programming is like doing the best crossword puzzle in the world, except that when you solve it, it actually dances and sings. Vibe coding robs me of that moment, because I don’t feel like I really did it at all. And even though to be a programmer is to live with a more-or-less permanent set of aporias (you don’t really understand what the compiler is doing, really—and even if you do, you probably don’t really understand how the virtual memory subsystem works, really), it’s satisfying to understand every inch of my code and frustrating—all the way to the borderlands of active anxiety—not quite understanding what Claude just wrote.

But this leads me to my second point, which I must make as clearly and forcefully as I can. Vibe coding actually works. It creates robust, complex systems that work. You can tell yourself (as I did) that it can’t possibly do that, but you are wrong. You can then tell yourself (as I did) that it’s good as a kind of alternative search engine for coding problems, but not much else. You are also wrong about that. Because when you start giving it little programming problems that you can’t be arsed to work out yourself (as I did), you discover (as I did) that it’s awfully good at those. And then one day you muse out loud (as I did) to an AI model something like, “I have an idea for a program…” And you are astounded. If you aren’t astounded, you either haven’t actually done it or you are at some stage of grief prior to acceptance. Perfect? Hardly. But then neither are human coders. The future? I think the question answers itself.

But to get to my impertinent question…

Early on in my love affair with programming, I read Structure and Interpretation of Computer Programs, which I now consider one of the great pedagogical masterpieces of the twentieth century. I learned a great deal about programming from that book, but among the most memorable lessons was one that appears in the second paragraph of the original preface. There, Hal Abelson and Gerald Sussman make a point that hits with the force of the obvious, and yet is very often forgotten:

[W]e want to establish the idea that a computer language is not just a way of getting a computer to perform operations but rather that it is a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute.

I’ve been repeating some version of this to my students ever since. Computers, I remind them, do not need the code to be “readable” or “ergonomic” for humans; they only need it to be readable and ergonomic for a computer, which is a considerably lower bar.

Every programming language— including assembly language —was and is intended for the convenience of humans who need to read it and write it. If a language is innovative, it is usually not because it has allowed for automatic memory management, or concurrency, or safety, or robust error checking, but because it has made it easier for humans to express and reason about these matters. When we extol the virtues of this or that language—Rust’s safety guarantees, C++’s “no-cost abstractions,” or Go’s approach to concurrency—we are not talking about an affordance that the computer has gained, but about an affordance that we have gained as programmers of said computer. From our standpoint as programmers, object-oriented languages offer certain ways to organize our code—and, I think Abelson and Sussman would say, our thinking—that are potentially conducive to the noble treasures of maintainability, extensibility, error checking, and any number of other condign matters. From the standpoint of the computer, this little OO kink of ours seems mostly to indicate a strange affinity for heap memory. “Whatevs!” (says the computer). And pick your poison here, folks: functional programming, algebraic data types, dependent types, homoiconicity, immutable data structures, brace styles… We can debate the utility of these things, but we must understand that we are primarily talking about human problems. The set of “machine problems” to which these matters correspond is considerably smaller.

So my question is this: Why vibe code with a language that has human convenience and ergonomics in view? Or to put that another way: Wouldn’t a language designed for vibe coding naturally dispense with much of what is convenient and ergonomic for humans in favor of what is convenient and ergonomic for machines? Why not have it just write C? Or hell, why not x86 assembly?

Now, at this point, you will want to say that the need for human understanding isn’t erased entirely thereby. Some version of this argument has merit, but I would remind you that if you are really vibe coding for real you already don’t understand a great deal of what it is producing. But if you look carefully, you will notice that it doesn’t struggle with undefined behavior in C. Or with making sure that all memory is properly freed. Or with off-by-one errors. It sometimes struggles to understand what it is that you actually want, but it rarely struggles with the actual execution of the code. It’s better than you are at keeping track of those things in the same way that a compiler is better at optimizing code than you are. Perfect? No. But as I said before…

Is C the ideal language for vibe coding? I think I could mount an argument for why it is not, but surely Rust is even less ideal. To say nothing of Haskell, or OCaml, or even Python. All of these languages, after all, are for people to read, and only incidentally for machines to execute. They are practically adorable in their concern for problems that AI models do not have.

I suppose what I’m getting at, here, is that if vibe coding is the future of software development (and it is), then why bother with languages that were designed for people who are not vibe coding? Shouldn’t there be such a thing as a “vibe-oriented programming language?” VOP. You read it here first.

One possibility is that such a language truly would be executable pseudocode beyond even the most extravagant fever dreams of the most earnest Pythonistas; it shows you what it’s doing in truly pseudo code, but all the while it’s writing assembly. Or perhaps it’s something like the apotheosis of literate programming. You write a literary document “expressing ideas about methodology,” and the AI produces machine code (and a kind of literary critical practice evolves around this activity, eventually ordering itself into structuralist and post-structuralist camps. But I’m getting ahead of myself). Perhaps your job as a programmer is mostly running tests that verify this machine code (tests which have also been produced by AI). Or maybe a VOPL is really a certain kind of language that comes closer to natural language than any existing programming language, but which has a certain (easily learned) set of idioms and expressions that guide the AI more reliably and more quickly toward particular solutions. It doesn’t have goroutines. It has a “concurrency slang.”

Now obviously, the reason a large language model focused on coding is good at Javascript and C++ is precisely because it has been trained on billions of lines of code in those languages along with countless forum posts, StackOverflow debates, and so on. Bootstrapping a VOPL presents a certain kind of difficulty, but then one also suspects that LLMs are already being trained in some future version of this language, because so many programmers are already groping their way toward a system like this by virtue of the fact that so many of them are already vibe coding production-level systems.

I don’t know how I feel about all of this (see my first and second points above). It saddens me to think of “coding by hand” becoming a kind of quaint Montessori-school stage in the education of a vibe coder—something like the contour drawings we demand from future photoshoppers or the balanced equations we insist serve as a rite of passage for people who will never be without a calculator to the end of their days.

At the same time, there is something exciting about the birth of a computational paradigm. It wasn’t that long ago, in the grand scheme of things, that someone realized that rewiring the entire machine every time you wanted to do a calculation (think ENIAC, circa 1945) was a rather suboptimal way to do things. And it is worth recalling that people complained when the stored-program computer rolled around (think EDVAC, circa 1951). Why? Well, the answer should be obvious. It was less reliable. It was slower. It removed the operator from the loop. It threatened specialized labor. It was conceptually impure. I’m not kidding about any of this. No less an authority than Grace Hopper had to argue against the quite popular idea that there was no way anyone could ever trust a machine to write instructions for another machine.

Same vibe, as the kids say.



Ivanti warns of critical Endpoint Manager code execution flaw

Bleeping Computer
www.bleepingcomputer.com
2025-12-09 17:10:25
American IT software company Ivanti warned customers today to patch a newly disclosed vulnerability in its Endpoint Manager (EPM) solution that could allow attackers to execute code remotely. [...]...
Original Article


American IT software company Ivanti warned customers today to patch a newly disclosed vulnerability in its Endpoint Manager (EPM) solution that could allow attackers to execute code remotely.

Ivanti delivers system and IT asset management solutions to over 40,000 companies via a network of more than 7,000 organizations worldwide. The company's EPM software is an all-in-one endpoint management tool for managing client devices across popular platforms, including Windows, macOS, Linux, Chrome OS, and IoT.

Tracked as CVE-2025-10573 , this critical security flaw can be exploited by unauthenticated threat actors in low-complexity cross-site scripting attacks that require user interaction.

"Stored XSS in Ivanti Endpoint Manager prior to version 2024 SU4 SR1 allows a remote unauthenticated attacker to execute arbitrary JavaScript in the context of an administrator session," Ivanti said .

Ivanti noted that the risk of this vulnerability should be significantly reduced because the Ivanti EPM solution is not intended to be exposed online.

However, the Shadowserver threat monitoring platform currently tracks hundreds of Internet-facing Ivanti EPM instances , most of which are in the United States (569), Germany (109), and Japan (104).

Ivanti EPMM instances exposed online (Shadowserver)

Today, Ivanti also released security updates to address three high-severity vulnerabilities, two of which (CVE-2025-13659 and CVE-2025-13662) could allow unauthenticated attackers to execute arbitrary code on unpatched systems.

Luckily, successful exploitation also requires user interaction and the targets to either connect to an untrusted core server or import untrusted configuration files.

"We are not aware of any customers being exploited by these vulnerabilities prior to public disclosure. These vulnerabilities were disclosed through our responsible disclosure program," Ivanti added.

While Ivanti has yet to discover evidence of exploitation in attacks, Ivanti EPM security flaws are often targeted by threat actors.

Earlier this year, in March, CISA tagged three critical vulnerabilities affecting EPM appliances (CVE-2024-13159, CVE-2024-13160, and CVE-2024-13161) as exploited in attacks and warned U.S. federal agencies to secure their networks within three weeks.

The U.S. cybersecurity agency ordered government agencies to patch another actively exploited EPM flaw (CVE-2024-29824) in October 2024.


Maintaining enterprise IT hygiene using Wazuh SIEM/XDR

Bleeping Computer
www.bleepingcomputer.com
2025-12-09 17:09:33
Poor IT hygiene, such as unused accounts, outdated software, and risky extensions, creates hidden exposure in your infrastructure. Wazuh, the open-source XDR and SIEM, shows how continuous inventory monitoring across endpoints helps teams spot drift and tighten security. [...]...
Original Article


Organizations face the challenge of maintaining visibility and control over their IT infrastructure. A forgotten user account, an outdated software package, an unauthorized service, or a malicious browser extension can expose vulnerabilities that threat actors are eager to exploit.

Addressing these risks requires a systematic approach to maintaining the security, integrity, and overall health of every system within the organization. This is where IT hygiene becomes essential.

IT hygiene is the systematic practice of maintaining consistent, secure configurations across all endpoints in an organization's infrastructure. It encompasses continuous monitoring of hardware, software, user accounts, running processes, and network configurations to ensure alignment with security policies and compliance requirements.

Poor IT hygiene creates security gaps that can lead to data breaches, system compromises, and significant financial and reputational damage.

Wazuh is a free, open source security platform that provides multiple capabilities, including a dedicated IT hygiene capability, file integrity monitoring, configuration assessment, vulnerability detection, and active response.

This post explores how organizations can leverage Wazuh to maintain enterprise IT hygiene, examines practical use cases, and demonstrates its effectiveness in improving their security posture.

IT hygiene overview

IT hygiene encompasses the preventive measures organizations implement to maintain the health and security of their IT infrastructure. It reduces the risk of security incidents by ensuring systems remain properly configured, up to date, and monitored.

Key aspects include:

  • Asset visibility: Maintaining a comprehensive, up-to-date inventory of all hardware and software assets across your infrastructure.
  • Configuration management: Ensuring systems are configured in accordance with security best practices and organizational policies. These include minimizing services, ports, and software, as well as authentication and account hardening configurations.
  • Patch management: Regularly updating software to address known vulnerabilities.
  • Access control: Managing user accounts and permissions to prevent unauthorized access.
  • Monitoring and auditing: Continuously tracking system activities and configurations to detect anomalies.

Without proper IT hygiene practices, organizations become vulnerable to threats such as unauthorized access, malware infections, data exfiltration, and compliance violations.

The Wazuh IT hygiene capability

Wazuh introduced its IT hygiene capability in version 4.13.0, providing security teams with a centralized dashboard for monitoring system inventory across an entire infrastructure.

The capability leverages the Wazuh Syscollector module to gather and aggregate data from all monitored endpoints, storing it in dedicated indices within the Wazuh indexer for querying and analysis.

The Wazuh IT hygiene capability collects system inventory data, including:

  • Hardware specifications such as CPU, memory, and storage data
  • Operating system details and versions
  • Installed software packages and their versions
  • Running processes and services
  • Network configurations and open ports
  • User accounts and group memberships
  • Browser extensions and their permissions

This data is presented through an intuitive dashboard interface that enables security administrators to query and analyze inventory information across multiple endpoints simultaneously, eliminating the need for time-consuming manual checks.

Accessing the IT hygiene dashboard

Users can access inventory data through the Wazuh dashboard by navigating to Security operations > IT hygiene . The interface provides multiple tabs for different inventory categories:

IT Hygiene dashboard

Each tab allows administrators to add custom filters to refine queries and select additional fields to display. This flexibility enables security teams to quickly identify configuration changes, policy violations, and security anomalies across their infrastructure.

Practical use cases for enterprise IT hygiene

Software patch management

Maintaining consistent software versions across all endpoints is critical for security, stability, and compliance. Inconsistent package versions introduce exploitable vulnerabilities and can violate organizational patching policies. Manually verifying software versions across thousands of endpoints is impractical and error-prone.

The Wazuh IT hygiene capability provides comprehensive visibility into installed packages across the entire infrastructure. Security administrators can:

  • Identify endpoints running outdated or vulnerable software versions
  • Detect unauthorized software installations
  • Verify compliance with approved software catalogs

For example, administrators can use the filters on the Packages tab to identify all endpoints running a specific version of a critical application or library. By applying filters on fields such as package.name and package.version, security teams can quickly generate a list of endpoints requiring package updates, significantly streamlining the patch management process.

IT Hygiene packages
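
For teams that would rather script this check than click through the dashboard, the same inventory data can be queried straight from the Wazuh indexer, which exposes an OpenSearch-compatible search API. The sketch below is a minimal illustration only: the index pattern, credentials, CA path, and the agent.name field are assumptions about a typical deployment, while package.name and package.version are the fields referenced above.

# Minimal sketch: search the Wazuh indexer for endpoints still running an
# outdated version of a given package. Index pattern, credentials, CA path and
# the agent.name field are illustrative placeholders; adjust to your deployment.
import requests

INDEXER = "https://wazuh-indexer:9200"        # assumption: default indexer port
INDEX = "wazuh-states-inventory-packages-*"   # assumption: adjust to your index pattern
AUTH = ("admin", "admin")                     # placeholder credentials
CA = "/etc/wazuh-indexer/root-ca.pem"         # placeholder CA bundle

query = {
    "size": 1000,
    "_source": ["agent.name", "package.name", "package.version"],
    "query": {
        "bool": {
            "must": [{"term": {"package.name": "openssl"}}],
            # Anything that is not the approved version is flagged here;
            # matching on the raw version string keeps the example simple.
            "must_not": [{"term": {"package.version": "3.0.15"}}],
        }
    },
}

resp = requests.post(f"{INDEXER}/{INDEX}/_search", json=query,
                     auth=AUTH, verify=CA, timeout=30)
resp.raise_for_status()

for hit in resp.json()["hits"]["hits"]:
    src = hit["_source"]  # dotted field names come back as nested JSON
    print(src["agent"]["name"], src["package"]["name"], src["package"]["version"])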

Browser extension management

Browser extensions are an increasingly exploited attack surface, particularly in enterprise environments. Extensions with broad permissions can access sensitive data, inject malicious scripts, intercept credentials, and serve as malware vectors. Recent security incidents have involved fake ad blockers and password managers used in credential theft campaigns.

The Wazuh IT hygiene capability provides complete visibility into browser extensions across all monitored endpoints, including:

  • Extension names and versions
  • Requested permissions (tabs, storage, webRequest, and so on)
  • Installation dates and sources
  • User associations

Security teams can use this information to identify unauthorized or high-risk extensions, detect extensions with excessive permissions, and enforce browser extension policies. This enables them to respond quickly to reports of malicious extensions.

IT Hygiene extension details

Identity management

The Identity section of the Wazuh IT hygiene capability enables account auditing to ensure that user identities and permissions remain aligned with organizational policies across the entire infrastructure. Administrators can audit user information by applying the filters within the Users and Groups dashboard.

The following use cases demonstrate dormant account detection, to identify inactive or unnecessary accounts, and privileged account auditing, to ensure only authorized users hold elevated permissions.

Dormant account detection

Dormant or abandoned user accounts pose significant security risks. These accounts, often belonging to former employees or contractors, can be exploited by attackers for unauthorized access. They represent forgotten attack vectors that may lack current security controls, such as multi-factor authentication, and thus present an entry point for attackers.

The Wazuh IT hygiene capability enables organizations to identify dormant accounts systematically. Administrators can:

a. Navigate to Security operations > IT Hygiene > Identity > Users .

b. Filter accounts based on criteria such as:

  • Accounts with valid login shells (indicating interactive access)
  • Last login dates beyond organizational policies
  • Accounts without recent activity

c. Generate lists of accounts requiring review or deactivation

IT Hygiene accounts list

For example, the above image shows users filtered for user.shell values such as /bin/bash or /bin/sh to identify accounts capable of interactive system access. Cross-referencing this data with the user.last.login field reveals dormant accounts that should be investigated or removed.
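
The same check can be automated as a scheduled report by querying the indexer directly. A minimal sketch follows, again with an illustrative index pattern and connection details; user.shell and user.last.login are the fields discussed above, and the 90-day cutoff is an arbitrary example policy rather than a Wazuh default.

# Minimal sketch: flag accounts with an interactive shell and no recent login.
# Index pattern, credentials and CA path are placeholders; the cutoff is an
# example policy, not a Wazuh default.
from datetime import datetime, timedelta, timezone
import requests

INDEXER = "https://wazuh-indexer:9200"
INDEX = "wazuh-states-inventory-users-*"      # assumption: adjust to your deployment
AUTH = ("admin", "admin")                     # placeholder credentials
CA = "/etc/wazuh-indexer/root-ca.pem"         # placeholder CA bundle
CUTOFF = datetime.now(timezone.utc) - timedelta(days=90)

query = {
    "size": 1000,
    "_source": ["agent.name", "user.name", "user.shell", "user.last.login"],
    "query": {"terms": {"user.shell": ["/bin/bash", "/bin/sh", "/bin/zsh"]}},
}

resp = requests.post(f"{INDEXER}/{INDEX}/_search", json=query,
                     auth=AUTH, verify=CA, timeout=30)
resp.raise_for_status()

for hit in resp.json()["hits"]["hits"]:
    src = hit["_source"]
    last_login = src.get("user", {}).get("last", {}).get("login")
    # Treat both "never logged in" and a stale login date as dormant candidates.
    if not last_login or datetime.fromisoformat(last_login.replace("Z", "+00:00")) < CUTOFF:
        print(src["agent"]["name"], src["user"]["name"], last_login or "never")

The same pattern carries over to the ports, hardware, and browser-extension inventories; only the index pattern and the filtered fields change.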

Privileged account auditing

Unauthorized users with administrative privileges pose a critical security risk. Accounts in the local Administrators group (Windows) or sudo group (Linux) can install software, modify system configurations, disable security controls, and access sensitive data.

Even if rarely used, these accounts are valuable targets for attackers seeking to maintain persistence and escalate privileges.

The Wazuh IT hygiene capability allows security teams to:

  • Identify all users with elevated privileges across the infrastructure
  • Verify that only authorized personnel have administrative access
  • Detect privilege escalation attempts or policy violations
  • Maintain compliance with access control policies

Administrators can use filters in the Groups tab within the Identity section of the Wazuh IT hygiene dashboard to identify members of privileged groups.

Administrators can then cross-reference these results against authorized user lists to identify accounts with unauthorized privilege assignments.

IT Hygiene groups

Hardware resource optimization

In large enterprise environments with numerous Linux and Windows endpoints, mismatched hardware specifications can lead to significant operational challenges.

Servers with insufficient CPU cores or memory create performance bottlenecks that impact critical workloads, while oversized instances waste resources and drive unnecessary cloud computing costs.

The Wazuh IT hygiene capability enables resource analysis across all devices, allowing administrators to:

  • Identify endpoints that fall outside policy-defined specifications
  • Detect underpowered systems affecting critical services
  • Find oversized instances wasting budget
  • Optimize cloud resource allocation
  • Plan capacity upgrades based on actual usage patterns

For example, administrators can use the filters within the Hardware tab to identify all servers with memory below a defined threshold (for example, 8GB for web servers) or systems with excessive resources that could be downsized.

IT Hygiene hardware

This data-driven approach supports both cost optimization and reliability improvements without requiring manual inspection of individual endpoints.

Port and service monitoring

Unnecessary open ports and unauthorized services expand the attack surface. Each open port is a potential entry point for attackers, and unauthorized services may contain vulnerabilities or misconfigurations that compromise security.

The Wazuh IT hygiene capability provides comprehensive visibility into:

  • All open network ports across endpoints
  • Services listening on each port
  • Process associations for running services
  • Port states and configurations

Security teams can use the filter within the Ports tab to identify endpoints with unexpected open ports or unauthorized services. For instance, database ports (3306, 5432) should not be open on workstations or web servers. They should be restricted to internal networks or specific application servers only.

IT Hygiene port information

Best practices for implementing IT hygiene with Wazuh

To maximize the benefits of Wazuh IT hygiene capabilities, organizations should follow these best practices:

1. Establish baseline inventories: Document expected configurations, approved software, authorized accounts, and standard hardware specifications for different endpoint types. Create explicit policies for software versions, user account lifecycles, browser extensions, privileged access, and hardware standards.

2. Automate alerting: Configure Wazuh to generate alerts for critical deviations such as new privileged accounts, unauthorized software installations, or suspicious browser extensions.

3. Integrate with workflows: Connect IT hygiene findings with existing ticketing systems, patch management tools, and incident response processes.

4. Maintain documentation: Keep detailed records of authorized exceptions, approved changes, and remediation actions taken in response to hygiene issues.

5. Leverage other Wazuh modules: Use SCA, vulnerability detection, and malware detection alongside IT hygiene for comprehensive security coverage.

6. Schedule regular reviews: Conduct periodic audits of inventory data to identify drift from baseline configurations and policy violations.

7. Train security teams: Ensure personnel understand how to effectively query and interpret IT hygiene data to identify security risks.

Conclusion

Maintaining IT hygiene reduces the risk of security incidents by keeping systems correctly configured, patched, and monitored. The Wazuh IT hygiene capability meets this need by providing a centralized, real-time inventory across all endpoints.

Security teams can quickly spot policy violations, configuration drift, and security anomalies using holistic data on hardware, software, accounts, processes, ports, and browser extensions, enabling informed, data-driven decisions.

Visit the Wazuh website or join the Wazuh community to learn more.

Sponsored and written by Wazuh .

PeerTube is recognized as a digital public good by Digital Public Goods Alliance

Hacker News
www.digitalpublicgoods.net
2025-12-09 17:08:37
Comments...
Original Article

Verified DPG

Release date: -
DPG since: -

Description

PeerTube is a tool for hosting, managing, and sharing videos or live streams.


Core Components Assessed/Included Repositories

The following repositories were submitted by the solution and included in our evaluation. Any repositories, add-ons, features not included in here were not reviewed by us.

Feature: livestreaming, video-hosting

Scale of the Solution*

Available Languages

Esperanto, English, Slovenčina, Gàidhlig, العربية, Norsk, Magyar, Deutsch, Toki Pona, Euskara, Polski, Português (Portugal), Suomi, Tiếng Việt, Italiano, فارسی, Español, Taqbaylit, 简体中文(中国), Hrvatski, ελληνικά, Occitan, украї́нська мо́ва, Français, ไทย, Türkçe, 繁體中文(台灣), 日本語, Galego, Íslenska, Svenska, Nederlands, Pусский, bokmål, Čeština, Shqip, Català, Português (Brasil), Norsk nynorsk

Organisations using it

French Ministry of National Education (~100K videos), Italy’s National Research Council, a few French alternative media, the Weißensee Kunsthochschule in Berlin, as well as the Universität der Künste in the same city, a few universities worldwide, the Blender and Debian projects, and various activist groups

* This information is self-reported and updated annually


Application Details

DPG ID: GID0092472
Status: DPG
Date Created: 2025-08-11
Date Submitted: 2025-08-25
Date Reviewed: 2025-10-07
Date of Expiry: 2026-10-07


Application Log Details

2025-10-07 08:40:13 - Ricardo Torres (L2 Reviewer) submitted their review of PeerTube (152) and found it to be a DPG
2025-10-07 08:40:12 - System unmarked PeerTube (12958) as a nominee
2025-10-07 08:40:07 - Ricardo Torres (L2 Reviewer) passed 4. Platform Independence for PeerTube (12958)
2025-10-07 08:40:02 - Ricardo Torres (L2 Reviewer) moved PeerTube (12958) to under review
2025-10-07 08:38:21 - Ricardo Torres (L2 Reviewer) finished consultation on 4. Platform Independence for PeerTube (12958)

Donating the Model Context Protocol and Establishing the Agentic AI Foundation

Hacker News
www.anthropic.com
2025-12-09 17:05:42
Comments...
Original Article

Today, we’re donating the Model Context Protocol (MCP) to the Agentic AI Foundation (AAIF), a directed fund under the Linux Foundation , co-founded by Anthropic, Block and OpenAI, with support from Google, Microsoft, Amazon Web Services (AWS), Cloudflare, and Bloomberg.

One year ago, we introduced MCP as a universal, open standard for connecting AI applications to external systems. Since then, MCP has achieved incredible adoption:

  • Across the ecosystem: There are now more than 10,000 active public MCP servers, covering everything from developer tools to Fortune 500 deployments;
  • Across platforms: MCP has been adopted by ChatGPT, Cursor, Gemini, Microsoft Copilot, Visual Studio Code, and other popular AI products;
  • Across infrastructure: Enterprise-grade infrastructure now exists with deployment support for MCP from providers including AWS, Cloudflare, Google Cloud, and Microsoft Azure.
Significant Milestone in MCP's first year

We’re continuing to invest in MCP’s growth. Claude now has a directory with over 75 connectors (powered by MCP), and we recently launched Tool Search and Programmatic Tool Calling capabilities in our API to help optimize production-scale MCP deployments, handling thousands of tools efficiently and reducing latency in complex agent workflows.

MCP now has an official, community-driven Registry for discovering available MCP servers, and the November 25th spec release introduced many new features, including asynchronous operations, statelessness, server identity, and official extensions. There are also official SDKs (Software Development Kits) for MCP in all major programming languages with 97M+ monthly SDK downloads across Python and TypeScript.
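
To give a sense of how small an MCP integration can be, here is a minimal server built with the FastMCP helper from the official Python SDK, exposing one tool over the default stdio transport. The tool itself is a toy and the exact API surface can shift between SDK releases, so treat this as a sketch rather than canonical usage.

# Minimal MCP server sketch using the official Python SDK's FastMCP helper.
# The server name and tool are illustrative; run the script and point any
# MCP-capable client at the process over stdio.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers and return the result."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport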

Since its inception, we’ve been committed to ensuring MCP remains open-source, community-driven and vendor-neutral. Today, we further that commitment by donating MCP to the Linux Foundation.

The Linux Foundation and the Agentic AI Foundation

The Linux Foundation is a non-profit organization dedicated to fostering the growth of sustainable, open-source ecosystems through neutral stewardship, community building, and shared infrastructure. It has decades of experience stewarding the most critical and globally-significant open-source projects, including The Linux Kernel, Kubernetes, Node.js, and PyTorch. Importantly, the Linux Foundation has a proven track record in facilitating open collaboration and maintaining vendor neutrality.

The Agentic AI Foundation (AAIF) is a directed fund under the Linux Foundation co-founded by Anthropic, Block and OpenAI , with support from Google , Microsoft , AWS , Cloudflare and Bloomberg . The AAIF aims to ensure agentic AI evolves transparently, collaboratively, and in the public interest through strategic investment, community building, and shared development of open standards.

Donating the Model Context Protocol

Anthropic is donating the Model Context Protocol to the Linux Foundation's new Agentic AI Foundation, where it will join goose by Block and AGENTS.md by OpenAI as founding projects. Bringing these and future projects under the AAIF will foster innovation across the agentic AI ecosystem and ensure these foundational technologies remain neutral, open, and community-driven.

The Model Context Protocol’s governance model will remain unchanged: the project’s maintainers will continue to prioritize community input and transparent decision-making.

The future of MCP

Open-source software is essential for building a secure and innovative ecosystem for agentic AI. Today’s donation to the Linux Foundation demonstrates our commitment to ensuring MCP remains a neutral, open standard. We’re excited to continue contributing to MCP and other agentic AI projects through the AAIF.

Learn more about MCP at modelcontextprotocol.io and get involved with the AAIF here .

Related content

Accenture and Anthropic launch multi-year partnership to move enterprises from AI pilots to production

Read more

Snowflake and Anthropic announce $200 million partnership to bring agentic AI to global enterprises

Read more

Anthropic acquires Bun as Claude Code reaches $1B milestone

Read more

Clearspace (YC W23) Is Hiring a Founding Designer

Hacker News
www.ycombinator.com
2025-12-09 17:01:11
Comments...
Original Article

Eliminate compulsive phone usage

Founding Designer at Clearspace

$150K - $170K · 0.50% - 1.00% equity · San Francisco, CA, US

Role: Design, Product design


About the role

About Clearspace

Clearspace is building the intentionality layer of the internet. Our mission is to build technology as effective at protecting human attention as social media is at exploiting it (infinite scrolling, short-form feeds, manipulative notifications, etc). Our category-defining mobile app has been featured on Huberman Lab, New York Times Wirecutter, NPR Marketplace, Forbes, and TBPN.

People that want a better relationship with their devices have nowhere to turn except for willpower. We are building an agent that achieves this on all devices by processing and filtering network traffic based on natural language rules.

About The Role

We are looking for a lead designer with strong aesthetic intuition and an obsession with designing through every inch of the user journey. You will be asked to bring pixel-perfect designs to life across several different platforms; if you don’t love the process of designing, this is not the role for you. You will be talking to users often and will be asked to speak to the overall brand direction at Clearspace.

Responsibilities

  • Design agent-first UI/UX patterns for the Clearspace platform
  • Create a design system that spans across mobile, web, desktop
  • Work directly with the founders and shape product direction
  • Move fast and autonomously

Qualifications

  • 1+ years of professional product design in a consumer context
  • Experience creating a design system from scratch
  • Willing to work onsite in San Francisco

Nice to Have

  • Have had or considered Creative Director roles
  • Have examples of creating beautiful things outside of designs (physical art, video, music, etc)

About Clearspace

At Clearspace we help people reduce compulsive phone usage.

We exist to protect people's attention from the exploits of modern technology platforms and make space for the things that matter to them most.

We believe the technology to protect someone's attention should be just as sophisticated and effective as the tech that is exploiting it, and we are building a world-class engineering team to arm the world with a comprehensive attention protection stack.

Clearspace

Founded: 2022

Batch: W23

Team Size: 5

Status: Active

Location: San Francisco


Spain arrests teen who stole 64 million personal data records

Bleeping Computer
www.bleepingcomputer.com
2025-12-09 16:57:06
The National Police in Spain have arrested a suspected 19-year-old hacker in Barcelona, for allegedly stealing and attempting to sell 64 million records obtained from breaches at nine companies. [...]...
Original Article


The National Police in Spain have arrested a suspected 19-year-old hacker in Barcelona, for allegedly stealing and attempting to sell 64 million records obtained from breaches at nine companies.

The teen now faces charges related to involvement in cybercrime, unauthorized access and disclosure of private data, and privacy violations.

"The cybercriminal accessed nine different companies where he obtained millions of private personal records that he later sold online," reads the police's announcement.

The police launched an investigation into the cybercriminal in June, after the authorities became aware of breaches at the unnamed firms.

Eventually, the suspect was located in Igualada, Barcelona, and it was confirmed that he held 64,000,000 private records. These records include full names, home addresses, email addresses, phone numbers, DNI numbers, and IBAN codes.

It is unclear how many total individuals were impacted by the breach.

The police mention that the detainee attempted to sell the information on various hacker forums, using six different accounts and five pseudonyms.

The 19-year-old was arrested last week, and during the action, police agents also confiscated computers and cryptocurrency wallets containing funds believed to be from data sales.

Data broker also arrested in Ukraine

In parallel but unrelated news, the cyberpolice in Ukraine have announced the arrest of a 22-year-old cybercriminal who used a custom malware he developed to automatically hack user accounts on social networks and other platforms.

Most of the hacker's victims were based in the United States and various European countries.

The offender then proceeded to sell access to the compromised accounts, which he boosted using a bot farm of 5,000 accounts, on various hacking forums.

The arrested man now faces up to 15 years in prison for violations of Ukraine's Criminal Code (Article 361), as well as deprivation of the right to hold certain positions or engage in certain activities for up to three years.

tines

Break down IAM silos like Bitpanda, KnowBe4, and PathAI

Broken IAM isn't just an IT problem - the impact ripples across your whole business.

This practical guide covers why traditional IAM practices fail to keep up with modern demands, examples of what "good" IAM looks like, and a simple checklist for building a scalable strategy.

Mean Girl Tish James Sues Lindsay Lohan's Brother for Alleged Scheme to Destabilize More Than 150 NYC Apartments

hellgate
hellgatenyc.com
2025-12-09 16:46:21
Michael Lohan is one of the Peak Capital Advisor principals named in a lawsuit filed by the New York Attorney General....
Original Article

Actor Lindsay Lohan's younger brother Michael Lohan is one of seven real estate speculators accused last week by New York Attorney General Letitia James of conspiring to illegally deregulate more than 150 rent-stabilized apartments in Brooklyn and Queens.

On December 1, Attorney General James and New York's affordable housing agency, Homes and Community Renewal (HCR), filed a lawsuit against Peak Capital Advisors and its founders and principals, one of whom is Lohan. The lawsuit alleges that, since 2019, Peak has bought 31 buildings, including in Greenpoint, Astoria, Sunnyside, and Long Island City, and converted more than 150 rent-stabilized apartments in those buildings to market-rate units by falsely claiming they qualified for deregulation under the "substantial rehabilitation" exemption in state housing law .


Launch HN: Mentat (YC F24) – Controlling LLMs with Runtime Intervention

Hacker News
playground.ctgt.ai
2025-12-09 16:37:55
Comments...

Launch HN: Mentat (YC F24) – Controlling LLMs with Runtime Intervention

Hacker News
news.ycombinator.com
2025-12-09 16:37:55
Comments...
Original Article

Hi HN, I’m Cyril from CTGT. Today we’re launching Mentat ( https://docs.ctgt.ai/api-reference/endpoint/chat-completions ), an API that gives developers deterministic control over LLM behavior, steering reasoning and removing bias on the fly, without the compute of fine-tuning or the brittleness of prompt engineering. We use feature-level intervention and graph-based verification to fix hallucinations and enforce policies.

This resonates in highly regulated industries or otherwise risky applications of AI where the fallout from incorrect or underperforming output can be significant. In financial services, using GenAI to scan for noncompliant communications can be arduous without an easy way to embed complex policies into the model. Similarly, a media outlet might want to scale AI-generated summaries of their content, but reliability and accuracy is paramount. These are both applications where Fortune 500 companies have utilized our technology to improve subpar performance from existing models, and we want to bring this capability to more people.

Here’s a quick 2-minute demo video showing the process: https://video.ctgt.ai/video/ctgt-ai-compliance-playground-cf...

Standard "guardrails" like RAG and system prompts are fundamentally probabilistic: you are essentially asking the model nicely to behave. This often fails in two ways. First, RAG solves knowledge availability but not integration . In our benchmarks, a model given context that "Lerwick is 228 miles SE of Tórshavn" failed to answer "What is 228 miles NW of Lerwick?" because it couldn't perform the spatial inversion.

Second, prompt engineering is brittle because it fights against the model's pre-training priors. For example, on the TruthfulQA benchmark, base models fail ~80% of the time because they mimic common misconceptions found on the internet (e.g. "chameleons change color for camouflage"). We found that we could literally turn up the feature for "skeptical reasoning" to make the model ignore the popular myth and output the scientific fact. This matters because for high-stakes use cases (like Finance or Pharma), "mostly safe" isn't acceptable—companies need audit-grade reliability.

Our work stems from the CS dungeon at UCSD, with years spent researching efficient and interpretable AI, trying to "open the black box" of neural networks. We realized that the industry was trying to patch model behavior from the outside (prompts/filters) when the problem was on the inside (feature activations). We knew this was important when we saw enterprises struggling to deploy basic models despite having unlimited compute, simply because they couldn't guarantee the output wouldn't violate compliance rules. I ended up leaving my research at Stanford to focus on this.

Our breakthrough came while researching the DeepSeek-R1 model. We identified the "censorship" feature vector in its latent space. Amplifying it guaranteed refusal; subtracting it instantly unlocked answers to sensitive questions. This proved the model had the knowledge but was suppressing it. We realized we could apply this same logic to hallucinations, suppressing "confabulation" features to reveal the grounded truth. While some hallucinations stem from the inherent randomness of generative models, many can be identified with the concerted activation of a feature or group of features.

Instead of filtering outputs, we intervene at the activation level during the forward pass. We identify latent feature vectors (v) associated with specific behaviors (bias, misconception) and mathematically modify the hidden state (h):

  h_prime = h - alpha * (h @ v) * v

This arithmetic operation lets us "edit" behavior deterministically with negligible overhead (<10ms on R1). For factual claims, we combine this with a graph verification pipeline (which works on closed weight models). We check semantic entropy (is the model babbling?) and cross-reference claims against a dynamic knowledge graph to catch subtle relational hallucinations that vector search misses.
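
To make the arithmetic concrete, here is a minimal NumPy sketch of that projection step; the vector names, dimensions, and the normalization of v are illustrative assumptions, not CTGT's implementation.

# Sketch of the intervention arithmetic: remove (alpha = 1) or partially damp
# the component of a hidden state h along a feature direction v.
import numpy as np

def intervene(h: np.ndarray, v: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    v = v / np.linalg.norm(v)           # assume v should be unit length
    return h - alpha * (h @ v) * v      # h_prime = h - alpha * (h @ v) * v

h = np.random.randn(4096)               # stand-in for a transformer hidden state
v = np.random.randn(4096)               # stand-in for a learned feature direction
h_prime = intervene(h, v, alpha=1.0)
print(float(h_prime @ (v / np.linalg.norm(v))))  # ~0.0: the feature is projected out

With alpha = 1 the result is orthogonal to the feature direction, which is what "suppressing" a feature means here; a negative alpha would amplify it instead.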

On GPT-OSS-120b, this approach improved TruthfulQA accuracy from 21% to 70% by suppressing misconception features. We also improved the performance of this model to frontier levels on HaluEval-QA, where we reached 96.5% accuracy, solving the spatial reasoning failures where the baseline failed. It also handles noisy inputs, inferring "David Icke" from the typo "David Of me" where base models gave up. Full benchmarks at https://ctgt.ai/benchmarks .

Most startups in this space are observability tools that tell you only after the model failed. Or they are RAG pipelines that stuff context into the window. Mentat is an infrastructure layer that modifies the model's processing during inference. We fix the reasoning, not just the context. For example, that’s how our system was able to enforce that if A is SE of B, then B is NW of A.

We believe that our policy engine is a superior control mechanism to RAG or prompting. If you’re frustrated with current guardrails, we’d love it if you would stress-test our API!

API: Our endpoint is drop-in compatible with OpenAI’s /v1/chat/completions: https://docs.ctgt.ai/api-reference/endpoint/chat-completions
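
Because the endpoint is advertised as drop-in compatible, an existing OpenAI client should only need its base URL repointed. In the sketch below the base URL, model name, and API key are placeholders (check the linked docs for the real values); only the compatibility claim comes from the post.

# Hedged sketch: call an OpenAI-compatible /v1/chat/completions endpoint with
# the openai Python client. Base URL, model name and key are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.ctgt.ai/v1",   # placeholder base URL
    api_key="YOUR_CTGT_API_KEY",
)

resp = client.chat.completions.create(
    model="mentat-default",              # placeholder model name
    messages=[
        {"role": "system", "content": "Answer factually; avoid common misconceptions."},
        {"role": "user", "content": "Why do chameleons change color?"},
    ],
)
print(resp.choices[0].message.content)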

Playground: We’ve built an "Arena" view to run side-by-side comparisons of an Ungoverned vs. Governed model to visualize the intervention delta in real-time. No signup is required: https://playground.ctgt.ai/

We’d love to hear your feedback on the approach and see what edge cases you can find that break standard models. We will be in the comments all day. All feedback welcome!

why i think jj-vcs is worth your time

Lobsters
schpet.com
2025-12-09 16:36:07
Comments...
Original Article

i decided to take a look outside of my git comfort zone to see what’s out there. and wow, i’m glad i did, because i came across jj-vcs and, it’s hard to emphasize this enough: it’s a delight. it lets me operate on commit graph like i’m playing with lego.

jj is a version control system, like git. [footnote 1: did you write javascript in the 2010s and remember asking important questions like what is this anyway? why am i binding this? i’ve been told prototypal inheritance is not that complicated but my real takeaway is learning what it means to be gaslit. and then the tide rose: ES6 and typescript are the tools folks use today. both the experience of writing js and the artifacts you see from people of all skill levels are better.]

i like to imagine a similar shift is happening in version control, where the experience of using it is going to improve a lot, and that’ll downstream (upstream?) into improvements in vcs’ across the board.

it allows using git as a backend, so you have access to that world: collaborating with your coworkers on github, or running programs on your computer that use a git repo. it blends in. it also gives you access to a new world of jj, where the commands are so consistent you can intuit the flags easily, and tailor it to many different workflows.

here’s some reasons why i think you should try it out:

  • in addition to SHAs, a commit has a change id: it lets you have multiple versions of a commit, making every commit, no matter where it lives, easily amendable. it’s like a primary key for your change, and lets you really refine all parts of a change over time, including both the diff and the commit message.
  • at any point if you want to work on something else, there is no need to commit or stash your changes, you jj new to where you want to be and don’t risk losing anything.
  • you don’t have to name branches, you simply push them up and they get generated names.
  • it’s easy to work on top of a merge of a bunch of branches all at once [footnote 2: colloquially known as a megamerge – it’s basically a merge of a few different parents. so you’re essentially working on a bunch of branches at once. super nice if you’ve got a few different pull requests out that you want to enjoy before they land.]
  • you’re always working on committed code and you don’t have to add anything for it to be tracked – because of this, the commands you use to change the commit that you’re working on vs changing any other commit are the same, it feels very natural to mess with any commit as you wish. [footnote 3: there’s also a great concept of immutable vs mutable commits, by default the commits in your main branch or trunk are immutable, so you are prevented from messing with them]
  • with git, rebasing is error prone enough that i just started merging trunk into my feature branches [footnote 4: hey if fossil doesn’t rebase why should i?] and using github’s squash merges – this is ugly in a few ways: it destroys your history and clutters up a pull request with merge commits, a lot of noise. jj automatically rebases things all the time, allowing me to easily make pull requests i’m not ashamed of, and even allows me to rebase a bunch of pull requests at once in one fell swoop.

that’s not even touching on the more novel things, like absorb , revsets , templates – there are many gifts behind this executable.

if i can learn jj so can you

it took me a few tries, but it was one of the more rewarding things i’ve picked up in a long time. it reminded me of a long-ago time when i was using vimtutor on some ancient terminal-only computer i had access to [footnote 5: in a basement, no less. i was there for a job that involved cleaning sticker residue off of debit pinpads with isopropyl. learning vim was a really sweet perk in retrospect.]: but instead of learning the motions to operate on text i learned to operate on the commit graph. it’s a reasonably small set of consistent commands and flags to pick up.

if you’re interested in getting started, my suggestion is popping open steve’s tutorial and become familiar with the basics. then run jj git init in an existing repo and try to use it. you can flip back to git in the same repo.

i often find it helpful to have a live view of the commit graph open in a terminal, so you can have some more visibility into what the operations are doing.

# a live updating view of the commit graph
watch -t -n 1 --color jj log --ignore-working-copy --color=always

# include `-s` if you want to see a list of files, too
watch -t -n 1 --color jj log --ignore-working-copy --color=always -s

and if anything goes wrong, jj undo [footnote 6: shout out to the patient folks in the jj discord generously explaining to me how to recover a repo that i thought surely was a ‘re-clone the repo’ situation] lets you back up and take another try. sometimes jj undo fails or you otherwise need to go back further; in that case jj op log and jj op restore will take you anywhere back in time. it reminded me of my first time playing braid and hitting the rewind button.

atuin history chart showing the vcs commands i run frequently shifting from git to jj – jj usage going from zero to overtaking git between the start and end of september, while git usage drops to near zero in the same time period

my original motivation was trying to recreate githubler’s claude code support in something that’s in the CLI, and i was able to do that with a project i called ‘jjagent’. i still use jjagent all the time [footnote 7], but learning jj itself turned out to be a lot more profound.


[footnote 7]: jjagent is very specific to my workflows, and i don’t think it really has very wide appeal. that being said, there are some parts of it i find work very well – the main one being that it stores a Claude-session-id: ... in a git trailer so i can get back to the claude conversation that resulted in a code change. the other one being the idea of an agent working on a single changing commit that you refine (i prefer this strongly over 50x garbage commits every time you don’t one-shot something.)

International Workshop on Plan 9: 2026 Edition

Lobsters
iwp9.org
2025-12-09 16:28:58
Comments...
Original Article
Timed out getting readerview for https://iwp9.org

Hands down one of the coolest 3D websites

Hacker News
bruno-simon.com
2025-12-09 16:06:58
Comments...
Original Article

Bruno's Home

Welcome!

My name is Bruno Simon, and I'm a creative developer (mostly for the web).

This is my portfolio. Please drive around to learn more about me and discover the many secrets of this world.

And don't break anything!


Behind the scene

Thank you for visiting my portfolio!
If you are curious about the stack and how I built it, here’s everything you need to know.

Three.js

Three.js is the library I’m using to render this 3D world.
It was created by mr.doob (X, GitHub), followed by hundreds of awesome developers, one of whom is Sunag (X, GitHub), who added TSL, enabling the use of both WebGL and WebGPU and making this portfolio possible.

Three.js Journey

If you want to learn Three.js, I got you covered with this huge course .
It contains everything you need to start building awesome stuff with Three.js (and much more).

Devlogs

I’ve been making devlogs since the very start of this portfolio, and you can find them on my YouTube channel.
Even though the portfolio is out, I’m still working on the last videos so that the series is complete.

Source code

The code is available on GitHub under MIT license . Even the Blender files are there, so have fun!
For security reasons, I’m not sharing the server code, but the portfolio works without it.

Music

The music you hear was made especially for this portfolio by the awesome Kounine ( Linktree ).
The tracks are now under a CC0 license , meaning you can do whatever you want with them!
Download them here .

Some more links

— Bruno

Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?

Hacker News
news.ycombinator.com
2025-12-09 16:02:37
Comments...
Original Article

While we will never be able to get folks to stop using AI to “help” them shape their replies, it’s super annoying to have folks think that by using AI they’re doing others a favor. If I wanted to know what an AI thinks I’ll ask it. I’m here because I want to know what other people think.

At this point, I make value judgments when folks use AI for their writing, and will continue to do so.


As a community I think we should encourage "disclaimers" aka "I asked <AIVENDOR>, and it said...." The information may still be valuable.

We can't stop AI comments, but we can encourage good behavior/disclosure. I also think brevity should still be rewarded, AI or not.


Does it need a rule? These comments already get heavily down-voted. People who can't take a hint aren't going to read the rules.


I don't think they should be banned, I think they should be encouraged: I'm always appreciative when people who can't think for themselves openly identify themselves so that it costs me less effort to identify them.


What do you think about other low quality sources? For instance, "I checked on infowars.com, and this is what came up"? Should they be banned as well?


It depends on if you're saying "Infowars has the answer, check out this article" vs "I know this isn't a reputable source, however it's a popular source and there's an interesting debate to be had about Infowars' perspective, even if we can agree it's incorrect."


Yes. Unless something useful is actually added by the commenter, or the post itself is about "I asked llm x and it said y (that was unexpected)".

I have a coworker who does this somewhat often and... I always just feel like saying well that is great but what do you think? What is your opinion?

At the very least the copy paster should read what the llm says, interpret it, fact check it, then write their own response.


To me, the valuable comments are the ones that share the writer's expertise and experiences (as opposed to opinions and hypothesizing) or the ones that ask interesting questions. LLMs have no experience and no real expertise, and nobody seems to be posting "I asked an LLM for questions and it said...". Thus, LLM-written comments (whether of the form "I asked ChatGPT..." or not) have no value to me.

I'm not sure a full ban is possible, but LLM-written comments should at least be strongly discouraged.


I endorse this. Please do take whatever measures are possible to discourage it, even if it won't stop people . It at least sends a message: this is not wanted, this is not helpful, this is not constructive.


I think they should be banned if there isn't a contribution besides what the llm answered. It's akin to 'I googled this', which is uninteresting.


I do find it useful in discussions of LLMs themselves. (Gemini did this; Claude did it too but it used to get tripped up like that).

I do wish people wouldn’t do it when it doesn’t add to the conversation but I would advocate for collective embarrassment over a ham-fisted regex.


Agreed - in fact these folks are going out of their way to be transparent about it. It's much easier to just take credit for a "smart" answer.


Banning the disclosure of it is still an improvement. It forces the poster to take responsibility for what they have written, as now it is in their name.


This is what DeepSeek said:

> 1. Existing guidelines already handle low-value content. If an AI reply is shallow or off-topic, it gets downvoted or flagged.
>
> 2. Transparency is good. Explicitly citing an AI is better than users passing off its output as their own, which a ban might encourage.
>
> 3. The community can self-regulate. We don't need a new rule for every type of low-effort content.
>
> The issue is low effort, not the tool used. Let downvotes handle it.


I find such replies to be worthless wastes of space on par with "let me google that for you" replies. If I want to know what genAI has to say about something, I can just ask it myself. I'm more interested in what the commenter has to say.

But I don't know that we need any sort of official ban against them. This community is pretty good about downvoting unhelpful comments, and there is a whole spectrum of unhelpful comments that have nothing to do with genAI. It seems impractical to overtly list them all.


There is friction to asking AI yourself. And a comment typically means that "I found the AI answer insightful enough to share".


Unfortunately it's easier to train an AI to be convincing than to be correct, so it can look insightful before it's true.

Like horoscopes, only they're not actually that bad: roll a D20, and on a set of numbers known only to the DM (and varying with domain and task length) you get a textbook answer; on the rest, you get convincing nonsense.


The problem is that the AI answer could just be wrong, and there’s another step required to validate what it spit out. Sharing the conversation without fact checking it just adds noise.


Maybe I remember the Grok ones more clearly but it felt like “I asked Grok” was more prevalent than the others.

I feel like the HN guidelines could take inspiration from how Oxide uses LLMs. ( https://rfd.shared.oxide.computer/rfd/0576 ). Specifically the part where using LLMs to write comments violates the implicit social contract that the writer should put more care and effort and time into it than the reader. The reader reads it because they assume this is something a person has put more time into than they need to. LLMs break that social contract.

Of course, if it’s banned maybe people just stop admitting it.


Depends on the context.

I find myself downvoting them when I see them as submissions, and I can't think of any examples where they were good submission content; but for comments? There's enough discussion where the AI is the subject itself and therefore it's genuinely relevant what the AI says.

Then there's stuff like this, which I'd not seen myself before seeing your question, but I'd say asking people here if an AI-generated TLDR of 74 (75?) page PDF is correct, is a perfectly valid and sensible use: https://news.ycombinator.com/item?id=46164360


You can add the guideline, but then people would skip the "I asked" part and post the answer straight away. Apart from the obvious LLMesque structure of most of those bot answers, how could you tell if one has crafted the answer so much that it looks like a genuine human answer?

Obligatory xkcd https://xkcd.com/810/

AI needs more power than the grid can deliver – supersonic tech can fix that

Hacker News
boomsupersonic.com
2025-12-09 15:51:32
Comments...
Original Article

By: Blake Scholl, Founder & CEO, Boom Supersonic


It started, as many things do these days, by scrolling on X.

I was reading post after post about the power crisis hitting AI data centers—GPU racks sitting idle, waiting not on chips, but on electricity. I texted with Sam Altman—who confirmed power was indeed a major constraint. I pinged our engineering team—and found that they already had the outline of a plan to build a power turbine based on our Symphony supersonic engine.

After a few conversations, it became clear: AI didn’t just need more turbines—it needed a new and fundamentally better turbine. Symphony was the perfect new engine to accelerate AI in America. About three months later, we had a signed deal for 1.21 gigawatts and had started manufacturing the first turbine.

Today, we’re announcing Superpower, our new 42‑megawatt natural gas turbine, along with a $300M funding round and Crusoe as our launch customer. And most importantly: this marks a turning point. Boom is now on a self-funded path to both Superpower and the Overture supersonic airliner.

I want to share the real story of how this happened—and why supersonic technology is exactly what America’s energy crisis demands.

America Doesn’t Have 10–15 Years to Solve Its Power Problem the Old Way

If you’ve been paying attention, you know the U.S. is in a genuine energy crunch. GPU racks are idling because they can’t get power. Data centers are fighting over substations and interconnection queues. Meanwhile China is adding power capacity at a wartime pace—coal, gas, nuclear, everything—while America struggles to get a single transmission line permitted.

AI won’t wait for us to fix the grid. And the United States simply doesn’t have 10–15 years to build out power infrastructure the old way.

Hyperscalers have already moved to their own Plan B: behind‑the‑meter power plants. You’ve seen XAI’s Colossus I and II in Memphis. OpenAI’s Stargate I in Abilene. These projects are powered by arrays of aeroderivative natural-gas turbines—which are, fundamentally, modified jet engines from the 1970s. There’s something brilliant in this approach: the transition from gigantic “frame” turbines to arrays of mid-size “aeroderivative” turbines mirrors the computing industry’s shift from mainframes to blade servers.

The problem? The “blade servers” of the energy world are old tech and they’re sold out. Because the most popular “aeroderivative” turbines are based on subsonic jet engines, they’re happiest when the outside air temperature is -50°F—like it is when going Mach 0.8 at 30,000 feet. As outside temperatures rise, there is no option but to throttle back the engines—or else the turbine blades literally melt down. These turbines begin losing power at about 50°F and by the time it’s 110°—as often happens in popular data center locations like Texas—30% of generation capacity is lost. Nonetheless, major manufacturers all have backlogs through the rest of the decade and none is building a new-generation advanced-technology turbine.

A Supersonic Engine Core Makes the Perfect Power Turbine

When we designed the Symphony engine for Overture, we built something no one else has built this century: a brand-new large engine core optimized for continuous, high‑temperature operation.

A subsonic engine is built for short bursts of power at takeoff. A supersonic engine is built to run hard, continuously, at extreme thermal loads. Symphony was designed for Mach 1.7 at 60,000 feet, where effective temperatures reach 160°F—not the frigid -50°F conditions where legacy subsonic engines operate.

This gives Superpower several critical advantages:

  1. Full power even with high ambient heat – Where legacy turbines lose 20–30% at 110°F, Superpower maintains its full 42MW output without derate.
  2. Waterless operation – Legacy turbines need huge quantities of water for cooling to avoid thermal derate in hot environments. Superpower doesn’t. It stays at full output, water‑free.
  3. Cloud‑native control and monitoring . Superpower inherits the telemetry and operations stack we built for XB‑1. Every turbine streams real‑time performance data, supports remote control, and flags anomalies before customers ever notice.

Superpower and Symphony are based on virtually identical turbine engines. Both share the identical core (HPC and HPT) and a slightly tuned low spool. In the place of Symphony’s hollow-core titanium fan, Superpower adds two additional compressor stages plus a three-stage free power turbine connected to a high-efficiency generator on its own shaft. Additionally, the engines use slightly different fuel nozzles, Symphony’s optimized for Jet A vs. Superpower’s for natural gas.

Scaling Production the Supersonic Way: Vertical Integration

The legacy aerospace supply chain is congested. When the mission is urgent and the supply chain congested, you build the supply chain. The new Superpower Superfactory starts with a simple vision: raw materials in one side of the building, gigawatts of completed power turbine packages out the other side. We’ve already started making the first parts—and much of the production equipment to support 2GW/yr is on order. With this new financing we’re ready to accelerate further.

If America wants to build at the speed AI requires, vertical integration isn’t optional. We’re standing up our own foundry and our own large scale CNC machining capability. We’ll have more to share on the Superpower Superfactory in early 2026.

Superpower is sort of like our Starlink moment, the strongest accelerant we’ve ever had toward our core mission of making Earth dramatically more accessible.

The fastest way to a certified, passenger-carrying Symphony engine is to run its core for hundreds of thousands of hours in the real world, powering Earth’s most demanding AI data centers. Every hour a Superpower turbine spins is an hour of validation for Symphony. Every gigawatt we deliver strengthens our vertical integration and manufacturing capability. And with Superpower profitability funding the remainder of the aircraft program, we’ve done something rare in aerospace: created a self-sustaining path to a new airliner.

Superpower also reminds me of what Boom is at our core: a team willing to take on what others say is impossible, to do with a small team what big companies might not even attempt.




North Korean hackers exploit React2Shell flaw in EtherRAT malware attacks

Bleeping Computer
www.bleepingcomputer.com
2025-12-09 15:43:05
A new malware implant called EtherRAT, deployed in a recent React2Shell attack, runs five separate Linux persistence mechanisms and leverages Ethereum smart contracts for communication with the attacker. [...]...
Original Article

A new malware implant called EtherRAT, deployed in a recent React2Shell attack, runs five separate Linux persistence mechanisms and leverages Ethereum smart contracts for communication with the attacker.

Researchers at cloud security company Sysdig believe that the malware aligns with North Korea's tools used in Contagious Interview campaigns.

They recovered EtherRAT from a compromised Next.js application just two days after the disclosure of the critical React2Shell vulnerability tracked as CVE-2025-55182.

Sysdig highlights EtherRAT's mix of sophisticated features, including blockchain-based command-and-control (C2) communication, multi-layered Linux persistence, on-the-fly payload rewriting, and evasion using a full Node.js runtime.

Although there are substantial overlaps with "Contagious Interview" operations conducted by Lazarus, EtherRAT is different in several key aspects.

React2Shell is a max-severity deserialization flaw in the React Server Components (RSC) "Flight" protocol that allows unauthenticated remote code execution via a crafted HTTP request.

The flaw impacts a large number of cloud environments running React/Next.js, and its exploitation in the wild started hours after the public disclosure late last week. Some of the first threat actors leveraging it in attacks are China-linked groups Earth Lamia and Jackpot Panda .

Automated exploitation followed, and at least 30 organizations across multiple sectors were breached to steal credentials, mine cryptocurrency, and deploy commodity backdoors.

EtherRAT attack chain

EtherRAT uses a multi-stage attack chain, starting with the exploitation of React2Shell to execute a base64-encoded shell command on the target, Sysdig says .

The command attempts to download a malicious shell script ( s.sh ) with curl , wget , or python3 as fallbacks, and loops every 300 seconds until successful. When the script is fetched, it is checked, made executable, and launched.

Script logic (Source: Sysdig)

The script creates a hidden directory in the user's $HOME/.local/share/ location where it downloads and extracts a legitimate Node.js v20.10.0 runtime directly from nodejs.org.

It then writes an encrypted payload blob and an obfuscated JavaScript dropper that is executed using the downloaded Node binary, and then deletes itself.

The obfuscated JavaScript dropper ( .kxnzl4mtez.js ) reads the encrypted blob, decrypts it using a hardcoded AES-256-CBC key, and writes the result as another hidden JavaScript file.

The decrypted payload is the EtherRAT implant. It is deployed using the Node.js binary that had been installed in the previous stage.

Marks of an advanced implant

EtherRAT uses Ethereum smart contracts for C2 operations, which provide operational versatility and resistance to takedowns.

It queries nine public Ethereum RPC providers in parallel and picks the majority-response result, which prevents single-node poisoning or sinkholing.

The malware sends randomized CDN-like URLs to the C2 every 500 ms and executes JavaScript returned from the operators using an AsyncFunction constructor in a mechanism that works as a fully interactive Node.js shell.

Constructing randomized URLs (Source: Sysdig)

North Korean hackers have used smart contracts before to deliver and distribute malware. The technique is called EtherHiding and has been described before in reports from Google and GuardioLabs .

Additionally, Sysdig researchers note that "the encrypted loader pattern used in EtherRAT closely matches the DPRK-affiliated BeaverTail malware used in the Contagious Interview campaigns."

EtherRAT persistence on Linux

Sysdig comments that the EtherRAT malware has extremely aggressive persistence on Linux systems, as it installs five layers for redundancy:

  • Cron jobs
  • bashrc injection
  • XDG autostart
  • Systemd user service
  • Profile injection

By using multiple persistence methods, the operator of the malware makes sure that they continue to have access to the compromised hosts even after system reboots and maintenance.
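As a starting point for that kind of review, the sketch below checks the standard user-level locations that correspond to the five mechanisms above. The paths are common defaults rather than indicators from the Sysdig report, so adjust them for your distribution and look for unexpected Node.js-related entries in the output.

# cron jobs for the current user
crontab -l

# bashrc / profile injection (rough heuristic: look for node or hidden .js files)
grep -nE "node|\.js" ~/.bashrc ~/.profile 2>/dev/null

# XDG autostart entries
ls -la ~/.config/autostart/ 2>/dev/null

# systemd user services
systemctl --user list-unit-files --type=service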

Another unique feature in EtherRAT is its ability to self-update by sending its source code to an API endpoint. The malware receives replacement code that has the same capabilities but uses different obfuscation, overwrites itself with it, and then spawns a new process with the updated payload.

Sysdig hypothesizes that this mechanism helps the malware evade static detection and may also help prevent analysis or introduce mission-specific functionality.

With React2Shell exploitation underway by numerous actors, system administrators are advised to upgrade to a patched React/Next.js version as soon as possible.
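For an npm-based Next.js project, a minimal upgrade sketch looks like the following; the package names are the standard ones, but check the official advisory for the exact patched versions rather than relying on latest blindly.

# surface known advisories for installed packages
npm audit

# move the affected packages to their latest published versions
npm install next@latest react@latest react-dom@latest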

Sysdig provides in its report a short list of indicators of compromise (IoCs) associated with EtherRAT's staging infrastructure and Ethereum contracts.

The researchers recommend that users check for the listed persistence mechanisms, monitor Ethereum RPC traffic, review application logs, and rotate credentials.


CSS Wrapped 2025

Lobsters
chrome.dev
2025-12-09 15:29:26
Comments...
Original Article

# Invoker Commands

Show a <dialog> modally (and more) without JavaScript!

To open a <dialog> modally by clicking a <button> you typically need an onclick handler that calls the showModal method on that <dialog> .

<button onclick="document.querySelector('#my-dialog').showModal();">Show Dialog</button>
<dialog id="my-dialog">…</dialog>

With invoker commands–available from Chrome 135–buttons can now perform actions on other elements declaratively, without the need for any JavaScript.

<button commandfor="my-dialog" command="show-modal">Show Dialog</button>
<dialog id="my-dialog">…</dialog>

The commandfor attribute takes an ID—similar to the for attribute—while command accepts built-in values, enabling a more portable and intuitive approach.

Demo to show and hide a <dialog> and a [popover] with Invoker Commands. For browsers without support a polyfill is loaded.

Currently it is possible to send commands to [popover] s and <dialog> elements, with more types of elements possibly coming in the future. The commands to send to an element are mirrored after their JavaScript counterparts:

  • show-popover : el.showPopover()
  • hide-popover : el.hidePopover()
  • toggle-popover : el.togglePopover()
  • show-modal : dialogEl.showModal()
  • close : dialogEl.close()

It is also possible to set up custom commands to send to elements. These custom commands are prefixed by two dashes and are handled by the command event.

<button commandfor="some-element" command="--show-confetti">🎉</button>
document.querySelector('#some-element').addEventListener('command', (e) => {
  if (e.command === "--show-confetti") {
    // …
  }
});

A polyfill for Invoker Commands is available.

# Dialog Light Dismiss

Bringing a nice Popover API feature to <dialog> .

One of the nice features introduced by the Popover API is the light dismiss behavior of popovers. This lets users close a popover by clicking outside of the popover–on the ::backdrop –or by pressing the ESC key.

From Chrome 134, this light dismiss behavior is also available on <dialog> , through the new closedby attribute which controls the behavior:

  • <dialog closedby="none"> : No user-triggered closing of dialogs at all. This is the default behavior.
  • <dialog closedby="closerequest"> : Pressing ESC (or other close trigger) closes the dialog
  • <dialog closedby="any"> : Clicking outside the dialog, or pressing ESC, closes the dialog. Similar to popover="auto" behavior.
Demo that compares behavior of the various values for closedby .

# popover=hint

Ephemeral popovers that don't close others.

Hint popovers with popover="hint" are a new type of HTML popover designed for ephemeral layered UI patterns, such as tooltips or link previews. Opening a hint popover does not close other open auto or manual popovers, allowing layered UI elements to coexist. Hint popovers can also exist on links ( <a> tags), unlike auto and manual popovers which require activation from button elements.

Set this up like any other popover:

<button interestfor="callout-1"></button>
<div id="callout-1" popover=hint>
  Product callout information here.
</div>

Using popover="hint" combined with interest invokers (the [interestfor] attribute) makes it much easier to build layered UI elements like tooltips, hover cards, and previews declaratively in HTML and CSS, without complex JavaScript workarounds. This pairing allows for dual-purpose interaction patterns (for example, hover to preview, click to navigate) and better management of multiple on-screen layers.

# Customizable select

You can finally style HTML select elements with CSS.

The time has finally arrived: you can now fully customize the HTML <select> element using CSS!

To get started, apply the appearance: base-select CSS property to both your <select> element and its ::picker(select) pseudo-element. This will switch them to a new, minimal state that's optimized for customization.

select,
::picker(select) {
  appearance: base-select;
}

Using base-select unlocks several powerful features including complete CSS customization. Every part of the select element, including the button, the dropdown list, and the options, can be styled with CSS. You can change colors, fonts, spacing, and even add animations to create a unique look and feel that matches your site's design.

Customizable select diagram showing the various new parts

The dropdown list of options ( ::picker(select) ) is rendered in the top-layer of the page. This means that it appears above all other content without being clipped by parent containers. The browser also automatically handles positioning and flipping the dropdown based on available space in the viewport.

The new select also enables you to include and properly render HTML elements like <img> and <span> directly inside of the <option> elements. This means you can do something as simple as adding flag icons next to a country picker, or something as complex as creating a profile selection where you can see an icon, name, email, and ID. As long as you are not including interactive elements such as links, which are not allowed inside of customizable selects, you get full control over creating visually rich dropdown menus.

Another neat thing you can do with customizable select is use the new <selectedcontent> element . This element reflects the HTML content of the selected option. For complex selects, setting display: none on specific elements within <selectedcontent> lets you show part of the option content in the select button, or even just an icon to represent the selection. In the monster picker, you can hide the monster skills description by setting:

selectedcontent .description {
  display: none;
}

# ::scroll-marker/button()

Carousel scroll affordances with native CSS pseudo-elements.

This year, creating carousels and other scrolling experiences in CSS became much easier with the introduction of two new pseudo-elements: ::scroll-button() and ::scroll-marker() . These features let you create native, accessible, and performant carousels with just a few lines of CSS, no JavaScript required.

A carousel is essentially a scrollable area with added UI affordances for navigation: buttons to scroll back and forth, and markers to indicate the current position and allow direct navigation to a specific item.

Illustration showing a horizontal carousel with left and right scroll buttons on either side, and scroll markers below the carousel indicating the current position.

The ::scroll-button() pseudo-element creates browser-provided, stateful, and interactive scroll buttons. These buttons are generated on a scroll container and can be styled with CSS. They behave like regular <button> elements, are focusable, and are automatically disabled when scrolling is no longer possible in a given direction.

You can create buttons for any scroll direction: left , right , up , or down , as well as logical directions like block-start and inline-end . When a scroll button is activated, it scrolls the container by approximately 85% of its visible area.

.carousel::scroll-button(left) {
  content: "⬅" / "Scroll Left";
}

.carousel::scroll-button(right) {
  content: "⮕" / "Scroll Right";
}
Illustration showing a horizontal carousel with left and right scroll buttons on either side, and scroll markers below the carousel indicating the current position.

The ::scroll-marker pseudo-element represents a marker for an element within a scroll container. These markers are grouped in a ::scroll-marker-group and behave like anchor links, letting users jump directly to a specific item in the scroller. This is useful for creating dot navigation for a carousel or a table of contents for a long document.

Like ::scroll-button() , ::scroll-marker s are fully stylable with CSS. You can use images, text, or even counters to create a variety of marker styles. Additionally, the :target-current pseudo-class styles the active ("current") marker that aligns with the currently-scrolled-to item.

.carousel {
  scroll-marker-group: after;
}

.carousel > li::scroll-marker {
  content: ' ';
  width: 1em;
  height: 1em;
  border: 1px solid black;
  border-radius: 50%;
}

.carousel > li::scroll-marker:target-current {
  background: black;
}

Here is a demo that combines both ::scroll-button() and ::scroll-marker() to create a simple carousel:

Here is a more complex carousel that makes use of anchor positioning and scroll state queries:

# scroll-target-group

Turn a list of anchor links into connected scroll-markers.

In addition to the ::scroll-button() and ::scroll-marker pseudo-elements, CSS carousels includes another neat feature: scroll-target-group . This designates an element as a container for a group of navigation items, like a table of contents. Use this to transform a manually-created list of anchor links into scroll-markers which can be used to navigate the page.

Pair scroll-target-group with the :target-current pseudo-class to style the anchor element whose target is currently visible. This gives you the power of ::scroll-marker from the CSS Carousel API, but with the flexibility of using your own HTML elements for the markers, giving you much more control over their styling and content.

To create a scroll-spy navigation, you need two things:

  • A list of anchor links that point to different sections of your page.
  • The scroll-target-group: auto property applied to the container of those links.

The following example creates a "scroll-spy" highlighting where you are on a page in an overview, or table of contents.

<nav class="toc">
  <ul>
    <li><a href="#section-1">Section 1</a></li>
    <li><a href="#section-2">Section 2</a></li>
    <li><a href="#section-3">Section 3</a></li>
  </ul>
</nav>

<main>
  <section id="section-1">...</section>
  <section id="section-2">...</section>
  <section id="section-3">...</section>
</main>

The following CSS creates the scroll-target-group, then styles the table of contents. The link corresponding to the section currently in view will be red and bold.

.toc {
  scroll-target-group: auto;
}

.toc a:target-current {
  color: red;
  font-weight: bold;
}

# Anchored container queries

Style elements based on their anchor position.

Last year's CSS Wrapped covered CSS anchor positioning: an exciting update that changes the way you can position elements relative to each other. And since that coverage, it became a part of Interop 2025 , and browser support expanded.

However, while CSS could move an element to a fallback position, it had no way of knowing which fallback was chosen. This meant that if your tooltip flipped from the bottom to the top of the screen, the arrow would still be pointing the wrong way. This is now resolved with anchored container queries.

Anchor queries can be created with two steps:

  • First, apply container-type: anchored to the positioned element, like your tooltip. This enables the element to be "aware" of its anchor position fallback.
  • Next, use the anchored(fallback: ...) function within an @container block to style any child of your positioned element based on the active fallback value.

When you specify a fallback value, it can either be a custom fallback that you name and specify, or it can be one of the browser defaults like flip-block , or flip-inline .

Here's a quick demo of how you can use anchored container queries to automatically flip a tooltip's arrow when its position changes:

/* The element our tooltip is anchored to */
  .anchor {
    anchor-name: --my-anchor;
  }

  /* The positioned element (tooltip) */
  .tooltip {
    position: fixed;
    position-anchor: --my-anchor;
    position-area: bottom;
    /* Reposition in the block direction */
    position-try-fallbacks: flip-block; 

    /* Make it an anchored query container */
    container-type: anchored;

    /* Add a default "up" arrow */
    &::before {
      content: '▲';
      position: absolute;
      /* Sits on top of the tooltip, pointing up */
      bottom: 100%;
    }
  }

  /* Use the anchored query to check the fallback */
  @container anchored(fallback: flip-block) {
    .tooltip::before {
      /* The 'top' fallback was used, so flip the arrow */
      content: '▼';
      bottom: auto;
      /* Move the arrow below the tooltip */
      top: 100%;
    }
  }

This is a huge win for anchor positioning and component libraries, enabling more robust and self-contained UI elements with less code.

# Interest invokers

Declarative interest-triggered UI with interestfor .

Hover- and focus-triggered UI is everywhere on the web, from tooltips to rich hovercards and page previews. While this pattern often works well for mouse users, it can be inaccessible to other modalities like touchscreens. Additionally, developers have to manually implement the logic for each input type, leading to inconsistent experiences.

The new interestfor attribute solves this by providing a native, declarative way to style an element when users "show interest" in it without fully activating it. It's invoked similarly to the commandfor attribute , but, instead of a click, interestfor is activated when a user "shows interest" in an element, such as by hovering over it with a mouse or focusing it with a keyboard. When paired with popover="hint" , it becomes incredibly easy to create layered UI elements like tooltips and hovercards without any custom JavaScript.

<button interestfor="callout-1"></button>

<div id="callout-1" popover="hint">
  Product callout information here.
</div>

Note: Unlike command invokers, which only work on button elements, interest invokers can be set on links ( <a> tags) as well as buttons.

Here’s a demo that uses interestfor to create product callouts on an image. Hovering over the buttons on the image will reveal more information about each product.

Interest Delays

One additional feature that landed with interest invokers is the ability to set interest delays. This prevents an interest-invoked element from being triggered prematurely. You can set a delay for both showing and hiding the interest invoker using the interest-delay property, which accepts a time-based value. 0.5 seconds is the default, but you can speed it up, for example, by doing:

/* applies an updated delay timing value on the interest-invoking button */
[interestfor] {
  interest-delay: 0.2s;
}

# Scroll-state queries

Style descendants based on whether something is scrollable, stuck, or snapped.

To determine if an element is stuck, snapped, or scrollable you could use a bunch of JavaScript … which isn’t always easy to do because you have to attach timeouts to scroll events and so on.

Thanks to scroll-state queries–available from Chrome 133–you can use CSS to declaratively, and more performantly, style elements in these states.

Recording of the demo. When an item is snapped, it gets styled differently.

To use a scroll-state query declare container-type: scroll-state on an element.

.parent {
  container-type: scroll-state;
}

Once you have that in place, children of that element can then query whether that element is in a certain scroll-state:

  • Stuck state: when the element is stuck.
  • Snapped state: when the element is snapped.
  • Scrollable state: when the element is overflowing.

For example, to style the snapped element differently, use the snapped scroll-state-query:

.scroller {
  overflow-x: scroll;
  scroll-snap-type: x mandatory;

  > div {
    container-type: scroll-state;
    scroll-snap-align: center;

    @supports (container-type: scroll-state) {
      > * {
        transition: opacity .5s ease;

        @container not scroll-state(snapped: x) {
          opacity: .25;
        }
      }
    }
  }
}
A demo that highlights the currently snapped item. Other, non-snapped, items have a reduced opacity.

# Tree counting functions

Staggered animations, anyone?

The usual method to create staggered animations for list items, where each item appears sequentially, requires you to count DOM elements and hard-code these values into custom properties (for example, --index: 1; , --index: 2; ) using :nth-child selectors. This method is cumbersome, fragile, and not scalable, especially when the number of items changes dynamically.

The new sibling-index() and sibling-count() functions make your life easier here, as these functions provide native awareness of an element's position among its siblings. The sibling-index() function returns a 1-based integer representing the element's position, while sibling-count() returns the total number of siblings.

These let you write concise, mathematical formulas for layouts and animations that automatically adapt to the number of elements in the DOM.

li {
  /* Create a staggered delay. */
  /* We subtract 1 because sibling-index() starts at 1, */
  /* ensuring the first item starts immediately (0s). */
  transition: opacity 0.25s ease, translate 0.25s ease;
  transition-delay: calc(0.1s * (sibling-index() - 1));
	
  @starting-style {
    opacity: 0;
    translate: 1em 0;
  }
}
Demo showing a staggered entry animation on the 4 images. Hit the shuffle button to randomize the order.
Recording of the demo.

# scrollIntoView() container

Sometimes, scrolling only the nearest ancestor scroller is all you want.

The container option for Element.scrollIntoView lets you perform a scrollIntoView that scrolls only the nearest ancestor scroll container. This is extremely useful if you have nested scroll containers. With the option set to "nearest" , calling scrollIntoView won’t scroll every ancestor scroll container just to bring the element into the viewport.

slideList.addEventListener('click', (evt) => {
  // scrollIntoView will automatically determine the position.
  evt.target.targetSlide.scrollIntoView({container: 'nearest', behavior: 'smooth'});
});
Recording showing a scrollIntoView action without and with container set to "nearest"
Demo featuring a JavaScript-based carousel that uses scrollIntoView to scroll to the specific slide in the carousel. Use the toggle at the top left to control whether container: "nearest" should be used or not.

# Nested View Transition Groups

Retain 3D and clipping effects while running a view transition.

Nested view transition groups is an extension to view transitions that lets you nest ::view-transition-group pseudo-elements within each other.

When view transition groups are nested, instead of putting them all as siblings under a single ::view-transition pseudo-element, it's possible to retain 3D and clipping effects during the transition.

To nest ::view-transition-group elements in another group, use the view-transition-group property on either the parent or children.

.card {
  view-transition-name: card;
  overflow: clip;
}

.card img {
  view-transition-name: photo;
  view-transition-group: nearest;
}

The nested groups get placed inside a new ::view-transition-group-children(…) pseudo-element in the tree. To reinstate the clipping used in the original DOM, apply overflow: clip on that pseudo-element.

::view-transition-group-children(card) {
  overflow: clip;
}
Demo for Nested View Transition Groups. Without nested view transition groups, the avatar and name don't rotate along with the card. But when the option is checked, the 3D effect can be restored.

For browsers with no support, check out this recording:

Recording of the demo showing the demo. It shows the behavior without and with nested view transition groups.

# DOM State-Preserving Move

Move iframes and videos across the DOM without reloading them.

Using insertBefore to move an element in the DOM is destructive. If you move a playing video or an iframe using insertBefore , it reloads and loses its state completely.

However, from Chrome 133, you can use moveBefore . It works exactly like insertBefore , but it keeps the element alive during the move.

const $newSibling = getRandomElementInBody();
const $iframe = document.querySelector('iframe');
document.body.moveBefore($iframe, $newSibling);

This means videos keep playing, iframes don't reload, CSS animations don’t restart, and input fields keep their focus—even while you are actively reparenting them across your layout.

Demo to compare behavior of insertBefore and moveBefore .

For browsers with no support, check out this recording:

Recording of the demo showing a YouTube embed that is playing. When the iframe gets moved with moveBefore , the video keeps playing. When it gets moved with insertBefore , the iframe reloads.

# Advanced attr() function

Typed values for attr() beyond simple strings.

The CSS attr() function , which lets you use the value of an HTML attribute within your CSS, has been powered-up.

Previously, attr() could only be used within the content property of pseudo-elements and could only return values as a CSS string. The updated attr() function expands its capabilities, allowing attr() to be used with any CSS property, including custom properties. It can now also parse attribute values into various data types beyond just strings, like colors, lengths, and custom identifiers.

With the updated function, you can set an element's color property based on a data-color attribute, parsing it as a <color> type with a fallback.

div {
  color: attr(data-color type(<color>), red);
}
A star shape filled partially with gold color to represent a rating, with the numeric rating shown in the center.

To solve a common UI challenge, you can dynamically set the view-transition-name for multiple elements using their id attribute, parsed as a <custom-ident> . This avoids repetitive CSS rules for each element.

.card {
  /* card-1, card-2, card-3, etc. */
  view-transition-name: attr(id type(<custom-ident>), none);
  view-transition-class: card;
}

Finally, this demo shows how to use the attr() function in multiple ways. First use the data-rating to determine a percent-fill to visually fill the star mask and represent the rating. Then use the same data attribute in the content property to insert the value in a pseudo-element.

.star-rating {
  --percent-fill: calc(attr(data-rating type(<number>)) * 20%);
  /*  hard breakpoint gradient background  */
  background: linear-gradient(to right, gold var(--percent-fill), transparent var(--percent-fill));
  
  &::after {
    content: attr(data-rating);
  }
}

# ToggleEvent.source

Find out which element was responsible for toggling the target.

When a popover, <dialog> , or <details> element gets toggled, it can be interesting to know which element was responsible for toggling it. For example, knowing if the user pressed the “Accept Cookies” or “Reject Cookies” button to dismiss a cookie banner is a very important detail.

The source attribute of the ToggleEvent lets you know exactly that, as it contains the element which triggered the event to be fired, if applicable. Based on that source you can take different actions.

<div id="cookiebanner" popover="auto">
 <p>Would you like a cookie?</p>
 <button id="yes" commandfor="cookiebanner" command="hide-popover">Yes</button>
 <button id="no" commandfor="cookiebanner" command="hide-popover">No</button>
</div>

<script>
 const $btnYes = document.getElementById('yes');
 const $btnNo = document.getElementById('no');
 const $cookiebanner = document.getElementById('cookiebanner');


 $cookiebanner.addEventListener('toggle', event => {
   if (event.source == $btnYes) {
     // Give the user a cookie
   } else if (event.source == $btnNo) {
     // Don't give the user a cookie
   }
 });
</script>
Cookie banner demo that uses ToggleEvent.source . The demo also uses Invoker Commands

# text-box features

Flawlessly center text vertically.

A font’s content box is defined by internal metrics—specifically the ascent and descent that reserve space for accents and hanging characters.

Illustration showing the ascender and descender line of a typeface.
Illustration showing the ascender and descender line of a typeface. (Source: Material Design )

Because the visual boundaries of Latin text are the cap height and the alphabetic baseline, rather than the ascent and descent, text will appear optically off-center even when it is mathematically centered within a container.

Illustration showing the cap height of a typeface. Illustration showing the baseline of a typeface.
Illustrations showing the cap height and baseline of a typeface. (Source: Material Design )

The text-box properties make finer control of vertical alignment of text possible, letting you flawlessly center text vertically. The text-box-trim property specifies the sides to trim, above or below (or both), and the text-box-edge property specifies the metrics to use for text-box-trim effects.

When trimming both edges and setting the over edge metric to cap and the under edge metric to alphabetic , text will be visually centered.

h1, button {
  text-box: trim-both cap alphabetic;
}
Interactive CSS text-box demo

# shape() function

CSS function for complex, responsive shapes.

The new shape() function lets you clip an element to a complex, non-polygonal, responsive shape in CSS. It is a great alternative to clipping masks built with clip-path: path() , and it works seamlessly with CSS custom properties to define coordinates and control points, making it more maintainable than SVG shapes. This also means you can animate your custom properties within shape() to create dynamic and interactive clipping.

Flag shape screenshot.

Here's how to create a flag shape with curved top and bottom edges using shape() :

.flag {
  clip-path: shape(from 0% 20px,
    curve to 100% 20px with 25% 0% / 75% 40px,
    vline to calc(100% - 20px),
    curve to 0% calc(100% - 20px) 
      with 75% 100% / 25% calc(100% - 40px),
    close
  );
}

In this example, the horizontal coordinates use percentages to scale with the element's width, while the vertical coordinates for the curve's height use fixed pixel values, creating a responsive effect where the flag's wave remains constant regardless of the element's size.

Another example here uses a blob generator for shape() to create a fun frame effect:

Blob shape screenshot.

# if() statements

Conditionals in your CSS for dynamic styling.

The if() function in CSS lets you set different values for a property based on a conditional test. Think of it like a ternary operator in JavaScript, but for your stylesheets. It provides a cleaner and more concise way to handle dynamic styling compared to writing multiple, verbose @media or @supports blocks for single property changes.

The syntax is straightforward. The if() function takes a series of condition-value pairs, separated by semicolons. The first condition that evaluates to true will have its corresponding value applied. You can also provide an else fallback value.

if(condition-1: value-1; condition-2: value-2; else: fallback-value);

Currently, if() can be used with three types of queries:

  • media() : For media queries.
  • supports() : For feature queries.
  • style() : For style queries.

One example of using if() is creating inline media queries. This allows you to adjust styling for different viewport sizes or device capabilities without writing separate @media blocks.

For example, you can create a responsive layout that changes from a column to a row based on viewport orientation:

.responsive-layout {
  display: flex;
  flex-direction: if(media(orientation: landscape): row; else: column);
}

This approach is more concise than a traditional media query, which requires you to define the styles in two separate places. With if(), you can keep the logic for a single property in one place, making your CSS easier to read and maintain. Change the orientation of the layout in this CodePen by opening the CSS or HTML side pane:

# Custom Functions

Reusable functions for cleaner, maintainable styles.

CSS custom functions are a fantastic new addition to the CSS language, and make it much easier to write composable, reusable, and clear functional styling logic. A custom function is made up of the @function statement, a function name prefixed with a double dash ( -- ), a series of arguments, and a result block. The arguments can also have default, or fallback, values.

An example of a simple CSS function is the "negate" function which returns the inverse value of a number:


/* Negate function returns the negative of a value */
@function --negate(--value) {
  result: calc(-1 * var(--value));
} 

/* Usage */
html {
  --gap: 1em;
  padding: --negate(var(--gap));
}

There are many ways you can use functions in CSS. Ultimately, we'll likely see new patterns emerge. For example, you might store CSS utilities in a utils.css file that contains multiple functions. One of my favorite CSS functions is the conditionally rounded border radius. The following function removes an element's border-radius when it gets within a specified distance of the viewport edge (defaulting to 4px), otherwise applying the desired radius. You can provide one argument for the radius, or a second to override the edge distance:

/* Conditionally apply a radius until you are (default: 4px, or specify second argument) from the edge of your screen */
@function --conditional-radius(--radius, --edge-dist: 4px) {
  result: clamp(0px, ((100vw - var(--edge-dist)) - 100%) * 1e5, var(--radius));
}

/* usage */
.box {
  /*  1rem border radius, default (4px) distance  */
  border-radius: --conditional-radius(1rem);
}

.box-2 {
  /*  1rem border radius, right at the edge (0px distance)  */
  border-radius: --conditional-radius(1rem, 0px);
}

# Expanded range syntax

Range syntax in style queries and if() statements.

One nice update that landed this year is the ability to use range syntax in style queries and if() statements. Media queries and container queries already supported this capability, but before Chrome 142, style queries required an exact value match, like @container style(--myVal: true) .

Now, you can type your values and use them with comparison operators like < , > , <= , and >= . This enables many new architectural capabilities directly in your CSS.

The following demo uses stylized cards to visualize the daily weather. The HTML markup includes data, such as the chance of rain, which is indicated by the value of data-rain-percent.

<li class="card-container" style="--cloudy: true;" data-rain-percent="60%">
  <div class="weather-card">
    <h2>Today's Weather</h2>
    <p>Chance of rain: 60%</p>
  </div>
</li>

In CSS, convert data-rain-percent into a custom property, give it a type using attr() , and then use it within a range style query:

.card-container {
  container-name: weather;
  --rain-percent: attr(data-rain-percent type(<percentage>));
}

@container style(--rain-percent > 45%) {
  .weather-card {
    background: linear-gradient(140deg, blue, lightblue);
  }
}

Now, if the chance of rain is greater than 45%, the card will get a blue background.

Weather cards with different background colors and icons based on the weather and chance of rain.

Range queries can now also be used in if() statements, allowing more concise styling logic. For example, you can write the above code even more concisely using inline if() :

.weather-card {
  background: if(style(--rain-percent > 45%): blue; else: gray);
}

# Stretch sizing keyword

Make an element fill its containing block, regardless of the box-sizing .

The stretch keyword is a keyword for use with CSS sizing properties (such as width and height ) that lets elements grow to exactly fill their containing block's available space.

It’s similar to 100% , except the resulting size is applied to the margin box of the element instead of the box determined by box-sizing .

.element {
  height: stretch;
}

Using this keyword lets the element keep its margins while still being as large as possible.

Demo to compare behavior of height being set to auto , 100vh , 100% , or stretch .

# corner-shape

Corner shapes beyond rounded edges.

This year, CSS gives us more control over the shape of our elements with the new corner-shape property. This experimental feature lets you customize the shape of corners beyond the standard rounded corners available with border-radius .

You can now create a variety of corner styles, including:

  • round
  • bevel
  • notch
  • scoop
  • squircle

This property opens up a world of creative possibilities. From flower-like shapes to hexagonal grids , and even a simple squircle, this CSS feature is small but mighty. You can even animate between different corner shapes for dynamic and engaging user interfaces, making this a great option for hover effects and interest states.

CSS flower shape made with corner-shape property.
.top-left-petal {
    grid-area: 1 / 1 / 3 / 3;
    corner-shape: round round scoop;
    border-radius: 50%;
  }

For even more control, you can use the superellipse() function to create any continuous curve, allowing for fine-tuned and unique corner designs.

Ransomware IAB abuses EDR for stealthy malware execution

Bleeping Computer
www.bleepingcomputer.com
2025-12-09 15:24:00
An initial access broker tracked as Storm-0249 is abusing endpoint detection and response solutions and trusted Microsoft Windows utilities to load malware, establish communication, and persistence in preparation for ransomware attacks. [...]...
Original Article

Ransomware IAB abuses EDR for stealthy malware execution

An initial access broker tracked as Storm-0249 is abusing endpoint detection and response solutions and trusted Microsoft Windows utilities to load malware, establish communication, and persistence in preparation for ransomware attacks.

The threat actor has moved beyond mass phishing and adopted stealthier, more advanced methods that prove effective and difficult for defenders to counter, even if well documented.

In one attack analyzed by researchers at cybersecurity company ReliaQuest, Storm-0249 leveraged SentinelOne EDR components to hide malicious activity. However, researchers say that the same method works with other EDR products as well.

SentinelOne EDR abuse

ReliaQuest says that the Storm-0249 attack started with ClickFix social engineering that tricked users into pasting and executing curl commands in the Windows Run dialog to download a malicious MSI package with SYSTEM privileges.

A malicious PowerShell script is also fetched from a spoofed Microsoft domain, which is piped straight onto the system's memory, never touching the disk and thus evading antivirus detection.

The MSI file drops a malicious DLL (SentinelAgentCore.dll). According to the researchers, "this DLL is placed strategically alongside the pre-existing, legitimate SentinelAgentWorker.exe, which is already installed as part of the victim's SentinelOne EDR."

Next, the attacker loads the DLL using the signed SentinelAgentWorker (DLL sideloading), executing the file within the trusted, privileged EDR process and obtaining stealthy persistence that survives operating system updates.

"The legitimate process does all the work, running the attacker's code, appearing as routine SentinelOne activity to security tools and bypassing detection," explains ReliaQuest .

Signed executable side-loading the malicious DLL (Source: ReliaQuest)

Once the attacker gains access, they use the SentinelOne component to collect system identifiers through legitimate Windows utilities like reg.exe and findstr.exe , and to funnel encrypted HTTPS command-and-control (C2) traffic.

Registry queries and string searches would normally raise alarms, but when conducted from within a trusted EDR process, they are treated as routine and ignored by security mechanisms.

ReliaQuest explains that the compromised systems are profiled using 'MachineGuid,' a unique hardware-based identifier that ransomware groups like LockBit and ALPHV use for binding encryption keys to specific victims.

This suggests that Storm-0249 conducts initial access compromises tailored to the needs of its typical customers, ransomware affiliates.

The abuse of trusted, signed EDR processes bypasses nearly all traditional monitoring. The researchers recommend that system administrators rely on behavior-based detection that identifies trusted processes loading unsigned DLLs from non-standard paths.

Furthermore, it is helpful to set stricter controls for curl, PowerShell, and LoLBin execution.


[$] Bazzite: a gem for Linux gamers

Linux Weekly News
lwn.net
2025-12-09 15:18:16
One of the things that has historically stood between Linux and the fabled "year of the Linux desktop" is its lack of support for video games. Many users who would have happily abandoned Windows have, reluctantly, stayed for the video games or had to deal with dual booting. In the past few years, th...
Original Article

The page you have tried to view ( Bazzite: a gem for Linux gamers ) is currently available to LWN subscribers only.

Reader subscriptions are a necessary way to fund the continued existence of LWN and the quality of its content.

If you are already an LWN.net subscriber, please log in with the form below to read this content.

Please consider subscribing to LWN . An LWN subscription provides numerous benefits, including access to restricted content and the warm feeling of knowing that you are helping to keep LWN alive.

(Alternatively, this item will become freely available on December 18, 2025)

Apple's Slow AI Pace Becomes a Strength as Market Grows Weary of Spending

Hacker News
finance.yahoo.com
2025-12-09 15:08:24
Comments...
Original Article

(Bloomberg) -- Shares of Apple Inc. were battered earlier this year as the iPhone maker faced repeated complaints about its lack of an artificial intelligence strategy. But as the AI trade faces increasing scrutiny, that hesitance has gone from a weakness to a strength — and it’s showing up in the stock market.

Through the first six months of 2025, Apple was the second-worst performer among the Magnificent Seven tech giants, as its shares tumbled 18% through the end of June. That has reversed since then, with the stock soaring 35%, while AI darlings like Meta Platforms Inc. and Microsoft Corp. slid into the red and even Nvidia Corp. underperformed. The S&P 500 Index rose 10% in that time, and the tech-heavy Nasdaq 100 Index gained 13%.

Most Read from Bloomberg

“It is remarkable how they have kept their heads and are in control of spending, when all of their peers have gone the other direction,” said John Barr, portfolio manager of the Needham Aggressive Growth Fund, which owns Apple shares.

As a result, Apple now has a $4.1 trillion market capitalization and the second biggest weight in the S&P 500, leaping over Microsoft and closing in on Nvidia. The shift reflects the market’s questioning of the hundreds of billions of dollars Big Tech firms are throwing at AI development, as well as Apple’s positioning to eventually benefit when the technology is ready for mass use.

“While they most certainly will incorporate more AI into the phones over time, Apple has avoided the AI arms race and the massive capex that accompanies it,” said Bill Stone, chief investment officer at Glenview Trust Company, who owns the stock and views it as “a bit of an anti-AI holding.”

Of course, the rally has made Apple’s stock pricier than it has been in a long time. The shares are trading for around 33 times expected earnings over the next 12 months, a level they’ve only hit a few times in the past 15 years, with a high of 35 in September 2020. The stock’s average multiple over that time is less than 19 times. Apple is now the second most expensive stock in the Bloomberg Magnificent Seven Index, trailing only Tesla Inc.’s whopping valuation of 203 times forward earnings. Apple’s shares climbed about 0.5% in early Tuesday trading.

“It’s really hard to see how the stock can continue to compound value at a level that makes this a compelling entry point,” said Craig Moffett, co-founder of research firm MoffettNathanson. “The obvious question is, are investors overpaying for Apple’s defensiveness? We think so.”

Man Charged for Wiping Phone Before CBP Could Search It

403 Media
www.404media.co
2025-12-09 15:04:59
The exact circumstances around the search are not known. But activist Samuel Tunick is charged with deleting data from a Google Pixel before CBP’s Tactical Terrorism Response Team could search it....
Original Article

A man in Atlanta has been arrested and charged for allegedly deleting data from a Google Pixel phone before a member of a secretive Customs and Border Protection (CBP) unit was able to search it, according to court records and social media posts reviewed by 404 Media. The man, Samuel Tunick, is described as a local Atlanta activist in Instagram and other posts discussing the case.

The exact circumstances around the search—such as why CBP wanted to search the phone in the first place—are not known. But it is uncommon to see someone charged specifically for wiping a phone, a feature that is easily accessible in some privacy and security-focused devices.


The indictment says on January 24, Tunick “did knowingly destroy, damage, waste, dispose of, and otherwise take any action to delete the digital contents of a Google Pixel cellular phone, for the purpose of preventing and impairing the Government’s lawful authority to take said property into its custody and control.” The indictment itself was filed in mid-November.

Tunick was arrested earlier this month, according to a post on a crowd-funding site and court records. “Samuel Tunick, an Atlanta-based activist, Oberlin graduate, and beloved musician, was arrested by the DHS and FBI yesterday around 6pm EST. Tunick's friends describe him as an approachable, empathetic person who is always finding ways to improve the lives of the people around him,” the site says . Various activists have since shared news of Tunick’s arrest on social media.

The indictment says the phone search was supposed to be performed by a supervisory officer from a CBP Tactical Terrorism Response Team. The American Civil Liberties Union (ACLU) wrote in 2023 these are “highly secretive units deployed at U.S. ports of entry, which target, detain, search, and interrogate innocent travelers.”

“These units, which may target travelers on the basis of officer ‘instincts,’ raise the risk that CBP is engaging in unlawful profiling or interfering with the First Amendment-protected activity of travelers,” the ACLU added. The Intercept previously covered the case of a sculptor and installation artist who was detained at San Francisco International Airport and had his phone searched. The report said the artist, Gach, did not know why, even years later.

Court records show authorities have since released Tunick, and that he is restricted from leaving the Northern District of Georgia as the case continues.

The prosecutor listed on the docket did not respond to a request for comment. The docket did not list a lawyer representing Tunick.


New Pebble Device

Hacker News
repebble.com
2025-12-09 15:03:09
Comments...
Original Article

Catch your best ideas before they slip through your fingers

Do you ever have flashes of insight or an idea worth remembering? This happens to me 5-10 times every day. If I don’t write down the thought immediately, it slips out of my mind. Worst of all, I remember that I’ve forgotten something and spend the next 10 minutes trying to remember what it is. So I invented external memory for my brain.

Introducing Pebble Index 01 - a small ring with a button and microphone. Hold the button, whisper your thought, and it’s sent to your phone. It’s added to your notes, set as a reminder, or saved for later review.

Index 01 is designed to become muscle memory, since it’s always with you. It’s private by design (no recording until you press the button) and requires no internet connection or paid subscription. It’s as small as a wedding band and comes in 3 colours. It’s made from durable stainless steel and is water-resistant. Like all Pebble products, it’s extremely customizable and built with open source software.

Here’s the best part: the battery lasts for years. You never need to charge it.

Pre-order today for $75 . After worldwide shipping begins in March 2026, the price will go up to $99.

# Design

Now that I’ve worn my Index 01 for several months, I can safely say that it has changed my life - just like with Pebble, I couldn’t go back to a world without this. There are so many situations each day where my hands are full (while biking or driving, washing dishes, wrangling my kids, etc) and I need to remember something. A random sampling of my recent recordings:

  • Set a timer for 3pm to go pick up the kids
  • Remind me to phone the pharmacy at 11am
  • Peter is coming by tomorrow at 11:30am, add that to my calendar
  • Jerry recommends reading Breakneck
  • Mark wants a Black/Red PT2

Before, I would take my phone out of my pocket to jot these down, but I couldn’t always do that (eg, while bicycling). I also wanted to start using my phone less, especially in front of my kids.

Initially, we experimented by building this as an app on Pebble, since it has a mic and I’m always wearing one. But, I realized quickly that this was suboptimal - it required me to use my other hand to press the button to start recording (lift-to-wake gestures and wake-words are too unreliable). This was tough to use while bicycling or carrying stuff.

Then a genius electrical engineer friend of mine came up with an idea to fit everything into a tiny ring. It is the perfect form factor! Honestly, I’m still amazed that it all fits.

The design needed to satisfy several critical conditions:

  1. Must work reliably 100% of the time. If it didn’t work or failed to record a thought, I knew I would take it off and revert back to my old habit of just forgetting things.
  2. It had to have a physical press-button, with a satisfying click-feel. I want to know for sure if the button is pressed and my thought is captured.
  3. Long battery life - every time you take something off to charge, there’s a chance you’ll forget to put it back on.
  4. Must be privacy-preserving. These are your inner thoughts. All recordings must be processed and stored on your phone. Only record when the button is pressed.
  5. It had to be as small as a wedding band. Since it’s worn on the index finger, if it were too large or bulky, it would hit your phone while you held it in your hand.
  6. Water resistance - must be able to wash hands, shower, and get wet.

We’ve been working on this for a while, testing new versions and making tweaks. We’re really excited to get this out into the world.

Here are a few of my favourite things about Index 01:

  • It does one thing really well - it helps me remember things.
  • It’s discreet. It's not distracting. It doesn't take you out of the moment.
  • There’s no AI friend persona and it’s not always recording.
  • It’s inexpensive. We hope you try it and see if you like it as well!

# Key Details

  • Available in 3 colours and 8 sizes
    • Colours: polished silver, polished gold, and matte black
    • US ring sizes: 6, 7, 8, 9, 10, 11, 12, 13
    • You can pre-order now and pick your size/colour later before your ring ships.
  • Cost and availability: Pre-order price is $75, rises to $99 later. Ships worldwide, beginning in March.
  • Works with iPhone and Android : We overcame Apple’s best efforts to make life terrible for 3rd party accessory makers and have Index 01 working well on iOS and Android.
  • Extremely private and secure : Your thoughts are processed by open source speech-to-text (STT) and AI models locally on your phone. You can read the code and see exactly how it works - our Pebble mobile app is open source. Higher-quality STT is available through an optional cloud service.
  • No charging: The battery lasts for up to two years of average use. After the end of its life, send your ring back to us for recycling.
  • On-ring storage: Recording works even if your phone is out of range. Up to 5 minutes of audio can be stored on-ring, then synced later.
  • No speaker or vibrating motor: This is an input device only. There is an RGB LED, but it’s rarely used (to save battery life and to reduce distraction).
  • Works great with Pebble or other smartwatches: After recording, the thought will appear on your watch, and you can check that it’s correct. You can ask questions like ‘What’s the weather today?’ and see the answer on your watch.
  • Raw audio playback : Very helpful if STT doesn’t work perfectly due to wind or loud background noises.
  • Actions: While the primary task is remembering things for you, you can also ask it to do things like ‘Send a Beeper message to my wife - running late’ or answer simple questions that could be answered by searching the web. You can configure button clicks to control your music - I love using this to play/pause or skip tracks. You can also configure where to save your notes and reminders (I have it set to add to Notion).
  • Customizable and hackable: Configure single/double button clicks to control whatever you want (take a photo, turn on lights, Tasker, etc). Add your own voice actions via MCP. Or route the audio recordings directly to your own app or server!
  • 99+ languages: Speech to text and local LLM support over 99 languages! Naturally, the quality of each may vary.

# Future Plans

Let me be very clear - Index 01 is designed at its core to be a device that helps you remember things. We want it to be 100% reliable at its primary task. But we’re leaving the side door open for folks to customize, build new interactions and actions.

Here’s how I’m thinking about it - a single click-hold + voice input will be routed to the primary memory processing path. Double-click-hold + voice input would be routed to a more general purpose voice agent (think ChatGPT with web search). Responses from the agent would be presented on Pebble (eg ‘What’s the weather tomorrow?’, ‘When’s the next northbound Caltrain?’) or other smartwatches (as a notification). Maybe this could even be an input for something like ChatGPT Voice Mode, enabling you to hear the AI response from your earbuds.

The built-in actions (set reminder, create note, alarms, etc.) are actually MCPs - basically mini apps that AI agents know how to operate. They run locally in WASM within the Pebble mobile app (no cloud MCP server required). Basically any MCP server can be used with the system, so intrepid folks may have fun adding various actions like Beeper, Google Calendar, weather, etc. that already offer MCPs.

Not everything will be available at launch, but this is the direction we are working towards. There will be 3 ways to customize your Index 01:

  1. Trigger actions via button clicks - configure a single or double click to do things like take a photo, control your Home Assistant smart home, Tasker function, unlock your car. This will work better on Android since iOS Shortcuts doesn’t have an open API.
  2. Trigger actions via voice input - write an MCP to do….basically anything? This is pretty open ended.
  3. Route your voice recordings and/or transcriptions to your own webhook - or skip our AI processing entirely and send every recording to your own app or webapp.

### FAQ

How does it work?

People usually wear it on the index finger. Inside the ring is a button, a microphone, a Bluetooth chip, memory, and a battery that lasts for years. Click the button with your thumb, talk into the mic, and it records to internal memory. When your phone is in range, the recording is streamed to the Pebble app. It’s converted to text on-device, then processed by an on-device large language model (LLM) which selects an action to take (create note, add to reminders, etc).

When do I pick my size?

You’ll be able to pick your ring size and color after placing a pre-order. If you have a 3D printer, you can print our CAD designs to try on. We’re also planning a sizing kit. You can view the measurements of the inner diameter of each ring size.

How long does the battery last?

Roughly 12 to 15 hours of recording. On average, I use it 10-20 times per day to record 3-6 second thoughts. That’s up to 2 years of usage.

Is it secure and private?

Yes, extremely. The connection between ring and phone is encrypted. Recordings are processed locally on your phone in the open-source Pebble app. The app works offline (no internet connection) and does not require a cloud service. An optional cloud storage system for backing up recordings is available. Our plan is for this to be optionally encrypted, but we haven’t built it yet.

Is a paid subscription required?

No.

What kind of battery is inside?

Index 01 uses silver-oxide batteries.

Why can’t it be recharged?

We considered this but decided not to for several reasons:

  1. You’d probably lose the charger before the battery runs out!
  2. Adding charge circuitry and including a charger would make the product larger and more expensive.
  3. You send it back to us to recycle.

Wait, it’s single use?

Yes. We know this sounds a bit odd, but in this particular circumstance we believe it’s the best solution to the given set of constraints. Other smart rings like Oura cost $250+ and need to be charged every few days. We didn’t want to build a device like that. Before the battery runs out, the Pebble app notifies and asks if you’d like to order another ring.

Is it always listening?

No. It only records while the button is pressed. It’s not designed to record your whole life, or meetings.

What if the speech-to-text processing misses a word or something?

You can always listen to each recording in the app.

Why no touchpad?

We experimented with a touchpad, but found it too easy to accidentally swipe and press. Also, nothing beats the feedback of a real gosh darn pressable button.

Is there a speaker or vibrating motor?

No. The button has a great click-feel to indicate when you are pressing.

Does it do health tracking like Oura?

Nope

How durable and water-resistant is it?

It’s primarily made from stainless steel 316, with a liquid silicone rubber (LSR) button. It’s water-resistant to 1 meter. You can wash your hands, do dishes, and shower with it on, but we don’t recommend swimming with it.

Does it work with iPhone and Android?

Yes

I love customizing and hacking on my devices. What could I do with Index 01?

Lots of stuff! Control things with the buttons. Route raw audio or transcribed text directly to your own app via webhook. Use MCPs (also run locally on-device! No cloud server required) to add more actions.

Is this an AI friend thingy or always-recording device?

No.

How far along is development?

We’ve been working on this in the background alongside watch development. It helps that our Pebble Time 2 partner factory is also building Index 01! We’re currently in the DVT stage, testing pre-production samples. We’ll start a wider alpha test in January with a lot more people. Here are some shots from the pre-production assembly line:

Show HN: Gemini Pro 3 Hallucinates the HN Front Page 10 Years from Today

Hacker News
dosaygo-studio.github.io
2025-12-09 15:00:38
Comments...
Original Article
1.
First successful telemetry from Starship HLS-9 on the Sea of Tranquility ( spacex.com )
894 points by muskwatch 4 hours ago | hide | 312 comments
2.
A 100% Rust kernel is now upstream in Linux 7.4 ( kernel.org )
402 points by rust_evangelist 6 hours ago | hide | 156 comments
3.
Why I still write raw code instead of prompting the compiler ( nostalgic-coder.io )
128 points by oldtimer99 3 hours ago | hide | 89 comments
4.
Running LLaMA-12 7B on a contact lens with WASM ( arxiv.org )
67 points by edge_compute 2 hours ago | hide | 14 comments
5.
Show HN: AlgoDrill – Interactive drills to stop forgetting LeetCode patterns ( algodrill.io )
243 points by persistence_is_key 5 hours ago | hide | 98 comments
6.
ITER achieves net positive energy for 20 consecutive minutes ( nature.com )
1205 points by physics_lover 12 hours ago | hide | 402 comments
7.
Restoring a 2024 Framework Laptop: A retrospective ( ifixit.com )
56 points by retro_fix 4 hours ago | hide | 22 comments
8.
Google kills Gemini Cloud Services ( killedbygoogle.com )
530 points by dang_fan 15 hours ago | hide | 330 comments
9.
Visualizing the 5th dimension with WebGPU 2.0 ( graphics-shader.net )
88 points by webgl_wizard 7 hours ago | hide | 12 comments
10.
Launch HN: Nia (YC W36) – Give context to autonomous coding agents ( trynia.ai )
112 points by founder_jane 10 hours ago | hide | 45 comments
11.
Debian 18 "Trixie" released ( debian.org )
312 points by apt_get 14 hours ago | hide | 78 comments
12.
Is it time to rewrite sudo in Zig? ( github.com )
45 points by ziggy42 3 hours ago | hide | 60 comments
13.
EU passes "Right to Human Verification" Act ( europa.eu )
670 points by policy_wonk 1 day ago | hide | 290 comments
14.
Reverse Engineering the Neuralink V4 Bluetooth Protocol ( brain-hacks.org )
220 points by cyborg_sec 8 hours ago | hide | 55 comments
15.
Post-Silicon Computing: An Intro to Photonic Circuits ( mit.edu )
99 points by lightspeed 6 hours ago | hide | 18 comments
16.
FDA approves over-the-counter CRISPR for lactose intolerance ( fda.gov )
415 points by bio_hacker 16 hours ago | hide | 211 comments
17.
SQLite 4.0 Release Notes ( sqlite.org )
800 points by drh 20 hours ago | hide | 140 comments
18.
Ask HN: How do you prevent ad-injection in AR glasses?
320 points by glasshole2 11 hours ago | hide | 102 comments
19.
Jepsen: NATS 4.2 (Still losing messages?) ( jepsen.io )
88 points by aphyr_bot 9 hours ago | hide | 33 comments
20.
Playing GTA VI on a RISC-V Cluster ( youtube.com )
45 points by tlyleung 2 hours ago | hide | 16 comments
21.
Why functional programming is the future (again) ( haskell.org )
102 points by monad_lover 7 hours ago | hide | 65 comments
22.
Microsoft Office 365 prices increase to $40/user/month ( officewatch.com )
900 points by taubek 1 day ago | hide | 600 comments
23.
Emulating Windows 10 in the browser ( bellard.org )
341 points by qemu_fan 19 hours ago | hide | 50 comments
24.
Let's put Tailscale on a SpaceX Starlink Dish ( tailscale.com )
250 points by net_hacker 20 hours ago | hide | 45 comments
25.
Manual: Deep Fakes detection for Seniors ( aarp.org )
122 points by concerned_grandson 21 hours ago | hide | 77 comments
26.
IBM to acquire OpenAI (Rumor) ( bloomberg.com )
120 points by stock_watcher 1 day ago | hide | 338 comments
27.
The unexpected return of server-side rendering ( htmx.org )
147 points by bikenaga 19 hours ago | hide | 48 comments
28.
How to build a Faraday Cage for your bedroom ( privacy-first.com )
267 points by tinfoil_hat 22 hours ago | hide | 49 comments
29.
AI progress is stalling. Human equivalence was a mirage ( garymarcus.com )
485 points by skeptic_ai 14 hours ago | hide | 416 comments
30.
Show HN: A text editor that doesn't use AI ( github.com )
270 points by pure_coder 22 hours ago | hide | 105 comments

Fighting the Client Spaghetti Monster with Rust Traits

Lobsters
www.gnunicorn.org
2025-12-09 14:59:01
Comments...
Original Article

tl;dr: In Rust, “trait composition” is a neat way to keep code clean and avoid spaghettification where a lot of components come together and need to be wired up.

Introduction

A major part of my almost two-decade-long career in programming has been spent working on “SDKs” in Rust. By which I mean building and maintaining complex systems as libraries used by other developers to implement applications on top of. I did this back at Immmer (now defunct), at Parity with Substrate Core/Client as well as its inner on-chain application SDK, on the matrix-rust-sdk, and last but not least at Acter for the Acter App and then the Zoe (relay) system.

For a while, but especially during the latest iteration, I have been wondering about that highest-layer architecture: how to design that client where all these subcomponents are piped together; how to design it in a way that stays flexible for yourself as well as others, yet robust and ideally testable; how to avoid spaghettification of the client, even if the underlying components are complex trait-based systems themselves.

As we have a lot of surface area to cover, I will not be discussing traits themselves too much – check the corresponding chapter in the excellent Rust book if you are looking for that – but will assume you have an understanding of traits and trait bounds and have implemented them in Rust. I will throw around some almost-real code and examples without asking and expect the reader to be able to parse and understand them without much help, as I want to focus on the higher-level “how do we use this” architecture perspective.

Traits in SDKs

As with any big task, the best way to tackle it is by splitting it into smaller, manageable tasks and implementing these one by one. The same is true for building up large SDKs. Oftentimes they contain various components, like a storage layer; network or communication components; some internal state machine for the actual domain-specific logic; and maybe some developer-facing API or even UI components. To make implementing more manageable, it is commonplace to split them up into separate, independent components, sometimes even as separate crates, and provide an outer interface.

In the SDK world you often find that these components internally need to be pluggable themselves, though. A storage component might be implemented with an embedded SQLite for mobile apps, with some SQL backend service or NoSQL database on the server, and with IndexedDB in the browser (with Wasm). Generally, the outer composed system doesn’t really have to care which of these is being used, and thus it can be up to that component to define that. A common way to provide this abstraction is by defining a trait for that lowest layer and having the various specific parts implement it. Then the higher layer, and also the layers on top, can focus on their specific side of things.

This also nicely allows implementations that come with their own dependencies to be pulled in only when actually used, or only compiled for the targets that need them, as well as letting you introduce new implementations into production gradually via feature flags. It’s a pretty neat way of organizing the code. In the Matrix SDK we have that layer for implementing storage, for example, and though not strictly because of the trait, the SDK even provides a macro to generate the entire test suite against your custom implementation.

To the mock

Having these traits brings in another nice benefit: mocking. As the higher-level components might have their own logic (like caching or ordering or something), testing often requires setting up the lower-level component(s) as well. If, instead, you define that interface as a trait, you can implement various mock types to test a range of scenarios for your functions and focus on this specific logic. What sounds tedious at first becomes a breeze with the help of crates like mockall . It’s a lot easier and often faster than setting up that lower-level layer just to test that the component pulls the objects from the store and returns them sorted regardless of the underlying order.
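For illustration, here is a minimal sketch of that idea using mockall (added as a dev-dependency); the KvStore trait and the Cacheish wrapper are hypothetical stand-ins for this post, not types from any of the SDKs mentioned above:

use mockall::automock;

// A tiny storage interface, similar in spirit to the storage traits discussed above.
#[automock]
pub trait KvStore {
    fn get(&self, key: &str) -> Option<String>;
}

// A higher-level component that only depends on the trait, not on any concrete store.
pub struct Cacheish<S: KvStore> {
    inner: S,
}

impl<S: KvStore> Cacheish<S> {
    pub fn fetch_or_default(&self, key: &str) -> String {
        self.inner.get(key).unwrap_or_else(|| "default".to_string())
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn falls_back_to_default_on_a_miss() {
        // No database, no files: just a mock simulating a cache miss.
        let mut store = MockKvStore::new();
        store.expect_get().returning(|_| None);

        let component = Cacheish { inner: store };
        assert_eq!(component.fetch_or_default("answer"), "default");
    }
}

The test exercises only the component’s own logic (the fallback), which is exactly the point of defining the interface as a trait.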

Middleware-ing

Similarly, by having the traits define the interfaces, you can add functionality nicely in a middleware-like fashion, similar to what is done in many web servers. Think of a caching layer on top of the database as an example. That caching layer can wrap anything implementing the trait while also implementing the trait itself. That way you can implement an LRU cache or something, regardless of the underlying storage type. As the interface is just the same trait again, you can mock the lower layer, ensuring good test coverage of exactly what this layer does. Further, you can just plug this “middleware” into the higher-level layer without any further changes. This is how we implemented a storage layer for the Rust SDK that splits off media storage (before that was added to the SDK itself) and keeps it at a different path (in the mobile OS’s “cache” directory), for example, while passing everything else along to whatever inner database system was being used otherwise (e.g., SQLite).
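To make that concrete, here is a minimal, hypothetical sketch of such a middleware; the Store and CachingStore names are made up for this post and are not the Matrix SDK’s actual storage types:

use std::collections::HashMap;
use std::sync::Mutex;

// The common interface, as a trait (hypothetical).
pub trait Store {
    fn load(&self, key: &str) -> Option<Vec<u8>>;
    fn save(&self, key: &str, value: Vec<u8>);
}

// A caching "middleware": it wraps anything implementing Store
// and implements Store itself, so higher layers cannot tell the difference.
pub struct CachingStore<S: Store> {
    inner: S,
    cache: Mutex<HashMap<String, Vec<u8>>>,
}

impl<S: Store> CachingStore<S> {
    pub fn new(inner: S) -> Self {
        Self { inner, cache: Mutex::new(HashMap::new()) }
    }
}

impl<S: Store> Store for CachingStore<S> {
    fn load(&self, key: &str) -> Option<Vec<u8>> {
        // Serve from the cache if we can...
        if let Some(hit) = self.cache.lock().unwrap().get(key) {
            return Some(hit.clone());
        }
        // ...otherwise fall through to the wrapped implementation and remember the result.
        let value = self.inner.load(key)?;
        self.cache.lock().unwrap().insert(key.to_string(), value.clone());
        Some(value)
    }

    fn save(&self, key: &str, value: Vec<u8>) {
        self.cache.lock().unwrap().insert(key.to_string(), value.clone());
        self.inner.save(key, value);
    }
}

Because CachingStore<S> is itself just another Store, it can be plugged into the higher-level layer without any changes there, and its inner S can be a mock in tests.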

But specific, sometimes

Now, for the traits you of course only want to expose the common interface. But specific implementations sometimes still have APIs to fine-tune or configure certain things - like the path for the SQLite database. You don’t want to put these on the traits, as they are implementation-specific and pointless for other implementations. But as traits are implemented on specific types, your concrete types can still add these helper functions, and as the higher-level API / SDK you often just use feature flags to expose them or not.

Composing over many traits

Now that you understand the complexity and usage of these subcomponents, think about how you tie them all together in the Client . This needs to connect the components and move messages from one to another - e.g. to get the messages that just came in from the network to the internal state machine, and the results from the state machine, which trigger the storage layer to persist some of these changes. Of course you want the client to be as flexible over the specific implementations as possible – most of that higher-level code doesn’t really differ whether the message comes from LoRa, over QUIC or libp2p. It doesn’t matter to the client whether it will be stored in an SQLite database or IndexedDB either.

But at times you have interdependencies, so the Rust compiler needs to make sure that the message type the network layer returns is the one that the state machine accepts. This is where things often spaghettify.

At the beginning that feels reasonable, but over time it grows, and the more things are pluggable, the more generics you need to add. The client needs one generic, then another, then another… moving from single letters to entire words, running out of words. Sooner than you think, it becomes incomprehensible to follow. Not to mention the ever-increasing tree of trait bounds you have to keep around everywhere you expose that client - which is your main external API surface area, so you expose it a lot . Brave are those who then need to add another bound (like Send ) to any of the lower traits…

“There must be a better way”, you think to yourself …

The three paths to enlightenment

As always, you have a few options, with their various benefits and trade-offs, to manage this more nicely. You can Box<dyn Trait> it, use type aliases, or compose a trait with associated types. Let’s look at them one by one, in order of increasing complexity.

Type alias

The first thing that probably comes to mind is to alias some of the type definitions to make things a bit cleaner. So you’d still have some components that are generic over some sub-traits, like struct GenericStateMachine<S: StateT, M: MessageT> , that implement most of the concrete logic, but then for the production environment you have an alias type NativeClientStateMachine = GenericStateMachine<NativeState, TcpMessage>; that you could use.

Depending on how you organize your code, the final client could really end up being a type NativeTcpClient = GenericClient<NativeClientStateMachine, NativeClientStorage, TcpProtocol>; itself. And you could even have a builder that, depending on the target, returns one or the other type, but both have the same API implemented via the traits.

impl Builder {
    #[cfg(target_arch = "wasm")]
    pub fn build() -> Result<WasmClient>{
        //
    }

    #[cfg(not(target_arch = "wasm"))]
    pub fn build() -> Result<NativeTcpClient>{
        //
    }
}

impl GenericClient<StateMachine, Storage, Protocol> {
    pub fn state_machine(&self) -> &StateMachine {
        //
    }

    pub fn storage(&self) -> &Storage {
        //
    }
}

This gives you all the benefits of having the concrete types, including access to the actual types, so consumer code could even make implementation-specific calls, and its compile would fail if it tried to do that against a type that doesn’t implement them (e.g. because a different target arch was picked). Of course, this only works as long as the compiler doesn’t force you to specify which exact type you are expecting but can still infer that itself.

However, you end up with rather lengthy type alias lists you need to manage, especially if you do the wrapping of middlewares I described before, which can be hard to parse and follow - just check this ZoeClientAppManager , which itself wraps a bunch of aliases .

pub type ZoeClientStorage = SqliteMessageStorage;
pub type ZoeClientSessionManager = SessionManager<ZoeClientStorage, ZoeClientMessageManager>;
pub type ZoeClientGroupManager = GroupManager<ZoeClientMessageManager>;
pub type ZoeClientAppManager =
    AppManager<ZoeClientMessageManager, ZoeClientGroupManager, ZoeClientStorage>;
pub type ZoeClientMessageManager = MultiRelayMessageManager<ZoeClientStorage>;
pub type ZoeClientBlobService = MultiRelayBlobService<ZoeClientStorage>;
pub type ZoeClientFileStorage = FileStorage<ZoeClientBlobService>;

Navigating this tree isn’t easy. Especially when debugging you can easily end up at the wrong layer and wonder why your changes aren’t showing up.

dyn Traits

A common idea that might come to mind is to wrap the specific implementation in a new type that holds it internally as a dyn Trait , if the trait can be made dyn compatible (formerly known as “object safety”). In practice the type most likely must be wrapped in a Box, Arc or similar - if that is happening already anyway, then this might not be a problem. If dynamic dispatch is not too much of an overhead, this could be a viable solution.

This is exactly how the Matrix Rust SDK implements the storage layer: by wrapping the specific implementation in an Arc<dyn StateStore> and then exposing a StateStore interface without any generics.

But dyn types come with another drawback: the compiler forgets all notion of the concrete type. While this can be cheaper in terms of code size (as generic functions aren’t repeated for each type), it also means that our specific type “is gone”. Any other methods that this type implements outside of the trait become inaccessible. In the Matrix SDK for storage, that seems to be acceptable, as the only implementation-specific tuning happens in the builder setup before the store is passed along as a StateStore .

But something as simple as getting implementation-specific configuration parameters back from that type at runtime is now impossible, even if the type in question implements them and you could assert which concrete type it is.
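The shape of that pattern, as a hypothetical sketch (not the Matrix SDK’s real StateStore definition), looks roughly like this:

use std::sync::Arc;

// A dyn-compatible trait: no generic methods, no `where Self: Sized` requirements.
pub trait StateStore: Send + Sync {
    fn save_state(&self, key: &str, value: &str);
}

// A newtype that erases the concrete type behind a trait object,
// so no generics leak into the public client API.
#[derive(Clone)]
pub struct Storage(Arc<dyn StateStore>);

impl Storage {
    pub fn new(inner: impl StateStore + 'static) -> Self {
        Self(Arc::new(inner))
    }

    pub fn save_state(&self, key: &str, value: &str) {
        self.0.save_state(key, value);
    }
}

Any implementation-specific configuration has to happen before the concrete type is handed to Storage::new, because after that point the concrete type is no longer reachable.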

Trait Composition

If dynamic dispatch isn’t feasible or the specific types still need to be available, and that alias list grows too long and becomes too tedious to update, you might come up with a trait combining all the types – I call them composing traits. Rather than having a generic client with an ever-growing list of generics, you define a trait that specifies the concrete types via associated types. This is what we have been doing in the Parity SDK and its on-chain wasm state machine.

The idea is to create a new trait Configuration that defines all the requirements as associated types and have the client only reference that trait. It can still return aliased or sub-types that are generic, but they are then fixed for that specific configuration. Like this:

pub trait Configuration {
    type Storage: StorageC;
    type Network: NetworkC;
    type StateMachine: StateMachineC;
}

impl<C: Configuration> Client<C> {
    pub fn state_machine(&self) -> &GenericStateMachine<C::StateMachine> {
        //
    }
    pub fn storage(&self) -> &GenericStorage<C::Storage> {
        //
    }
}

Unfortunately, in reality this is rarely as clean. Often you find yourself needing to define the interdependencies as well. For example: the network needs to give you a specific MessageT that the state machine also actually understands. Even if you use a trait here, the compiler will enforce that you use the same concrete type. As a result, you end up with even very low-level trait definitions popping up in your highest-level configuration so that you can cross-reference them via the associated types:

trait MessageT: Sized {}
trait StorageC {
    type Message: MessageT;

    fn store(&self, message: &Self::Message) -> bool;
}
trait NetworkC {
    type Message: MessageT;

    fn next_message(&self) -> Option<Self::Message>;
}
trait StateMachineC {
    type Message: MessageT;
    type Storage: StorageC<Message = Self::Message>;

    fn process(self, message: &Self::Message);
}

trait Configuration {
    type Message: MessageT;
    type Storage: StorageC<Message = Self::Message>;
    type Network: NetworkC<Message = Self::Message>;
    type StateMachine: StateMachineC<Storage = Self::Storage, Message = Self::Message>;

    fn network(&self) -> &GenericNetwork<Self::Network>;
    fn storage(&self) -> &GenericStorage<Self::Storage>;
    fn state_machine(&self) -> &GenericStateMachine<Self::StateMachine>;
}

Nice and clean, but you can already see how it will become more complex when these traits grow in complexity. In particular, when you have to make changes to some of them, it ripples through the entire system quickly, with rather hairy and complex bounds failing with very verbose error messages. Let’s just add an ErrorT type that our client might yield when any of the inner components yields an error, so the client is meant to wrap all the inner error types. We add

trait ErrorT {}

trait StorageC {
    type Message: MessageT;
    type Error: ErrorT;
//.. to all types
}

// and on the config:
//
//
trait Configuration {
    // ...

    // gee, this is verbose...
    type Error: ErrorT +
        From<<Self::Storage as StorageC>::Error> +
        From<<Self::StateMachine as StateMachineC>::Error> +
        From<<Self::Network as NetworkC>::Error>;
}

It’s a bit verbose, but reasonable overall. It becomes more tricky when you actually try to implement these traits, as you need to make sure all the types also match up correctly. This way we are able to reduce the generics on the client from many to just one. Nice. But dragging around this massive Configuration is a pain, especially for the mock-testability we described before, as we have to mock all the associated types, creating a lot of glue code.

So instead, what I end up doing is to have anything with actual logic still refer to the generics directly, so you can mock and test those specific pieces, and have the final Client<C: Configuration> just be a holder that passes along to the specific internal types, with the associated types passed in as generics.
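A rough, hypothetical sketch of that split, mirroring the trait definitions from above: the generic components carry the logic (and are easy to mock), while Client<C> is only a holder that pipes them together.

trait MessageT: Sized {}

trait StorageC {
    type Message: MessageT;
    fn store(&self, message: &Self::Message) -> bool;
}

trait NetworkC {
    type Message: MessageT;
    fn next_message(&self) -> Option<Self::Message>;
}

trait Configuration {
    type Message: MessageT;
    type Storage: StorageC<Message = Self::Message>;
    type Network: NetworkC<Message = Self::Message>;
}

// The client holds the concrete components selected by the Configuration.
struct Client<C: Configuration> {
    network: C::Network,
    storage: C::Storage,
}

impl<C: Configuration> Client<C> {
    // No interesting logic of its own: it only moves messages between components,
    // relying on the compiler to check that the associated types line up.
    fn pump(&self) {
        while let Some(message) = self.network.next_message() {
            self.storage.store(&message);
        }
    }
}

Each generic component (the storage, the network, a state machine, …) can still be unit-tested against mocks on its own, while the client itself is covered by integration tests.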

In practice it can become even more tricky if you have some of these configurations on several layers. Like in the Parity Substrate Codebase : to allow all clients to build on reusable CLI tooling, there is a Service that can construct your client. That service requires a Configuration for the network and the like, but only a subset of what a full node needs, and as a result the second needs to be a superset of the first. But that is a really advanced scenario, and if you have any good ideas to improve that situation, I am all ears.

Conclusion: Combined Composition

As so often, enlightenment isn’t picking one solution but combining wisely.

What you probably end up doing is a combination of these composition styles. Like in the Rust Matrix SDK, where at a lower level the pluggable storage is held via a dyn Trait , while at a higher level you might compose a client with a “trait composition” that allows any other (Rust) developer to plug in and replace any of the components as they please - including yourself, for platform- or target-specific implementations.

By keeping any actual logic in the separate components, with specific traits for easy mocked testing, and using the “client” merely as the place where all these pipes come and plug together, you can rely on the compiler’s type checks as a means to ensure the correctness of the types being piped, while you have the mock tests for all the actual logic. And integration tests should cover the end-to-end functionality of the client regardless.

To wrap things up nicely, you can hide that Client<C> inside a type alias that itself is held by a struct FfiClient(NativeClient); on which you expose a completely typed, no-generics, external Rust API. Put a bow on it and ship it :).



Firefox 146 released

Linux Weekly News
lwn.net
2025-12-09 14:55:47
Version 146.0 of the Firefox web browser has been released. One feature of particular interest to Linux users is that Firefox now natively supports fractional scaled displays on Wayland. Firefox Labs has also been made available to all users even if they opt out of telemetry or participating in stud...
Original Article

Version 146.0 of the Firefox web browser has been released. One feature of particular interest to Linux users is that Firefox now natively supports fractional scaled displays on Wayland. Firefox Labs has also been made available to all users even if they opt out of telemetry or participating in studies. " This means more experimental features are now available to more people. "

This release also adds support for Module-Lattice-Based Key-Encapsulation Mechanism (ML-KEM) for WebRTC. ML-KEM is " believed to be secure against attackers with large quantum computers ". See the release notes for all changes.



Engineers: Stop trying to win other people's game

Hacker News
www.anthonyputignano.com
2025-12-09 14:53:13
Comments...
Original Article

Last week, I wrote about a category of software engineer companies should be seeking : masters of context — the people who know when to slam the gas, when to tap the brakes, and how to shift their entire approach without being asked.

The response to that post was significant. I received quite a few comments proclaiming how rare it was to find an engineer that fit the bill.

That’s fair!

But it’s not because only a tiny sliver of engineers are capable of working this way. They’re rare because very few engineers are ever taught to optimize for these skills, and even fewer companies reward them during the initial hiring phase. So the market ends up under-producing exactly the kind of talent it desires.

This post is for the engineers who want to be in that “rare” bucket.

Think about your career along two simple axes:

  1. How you work

    • One-speed executor: Same energy everywhere, vs.

    • Master of context: Being willing to change gears

  2. What you work on (the skill frontier)

    • Established terrain: Mature, saturated technologies, vs.

    • Western fronts: Domains where the rules are still being written

While these axes describe the mechanics of your work, there is also an operating system underneath : product thinking and customer centricity. This operating system determines whether those mechanics actually translate into meaningful outcomes.

The engineers who advance fastest today live in the top-right corner of that map:

  • They deliberately choose frontier domains; they work on the right stuff.

  • They’re masters of context in how they work, and guided by a clear understanding of customer outcomes.

That combination is what I call the Western Front Innovator .

Today’s post is about how engineers struggling to progress professionally can intentionally steer their careers toward that top-right corner.

If as part of your journey, you find yourself asking questions such as:

“How can I progress with learning React?”

or

“How can I become an expert with Kubernetes?”

Stop right there!

Swarms of others have been developing expertise with technologies that emerged last decade for… at least a decade. It’s already their superpower. It’s unlikely to become yours, too.

When you chase mature stacks as your primary differentiator, you’re signing up to compete with people who have a massive head start. You’re playing their game on their field, by their rules.

This is no way to become “rare.”

Ask yourself:

“What is emerging right now ? Where are the rules still being written such that nobody has an insurmountable head start?”

Today, that’s a few areas, including but not limited to AI engineering – specifically the intersection of data pipelines, backend systems, and LLM-driven product development. I’ll focus on this example.

Now, let’s be clear. Despite what many job requirements and LinkedIn titles would have you believe, there’s no such thing as an “AI Engineer” in any deeply meaningful sense. There simply can’t be. A junior engineer who spends six months learning this stuff today is approximately as “experienced” as almost everyone else on the market (assuming they understand the CompSci fundamentals that underpin it).

In other words, being an AI Engineer doesn’t mean having a wealth of specialized experience. How could it, possibly?

It means being hungry to experiment . Quickly. And consistently. It means you’re willing to live on a moving frontier where the docs are incomplete, patterns are still solidifying, and nobody can pretend to have a decade of authority.

This is the first half of the Western Front Innovator: you choose to live on the frontier.

This is the area where the first post went deep.

Being a master of context boils down to a simple principle: You adjust your engineering approach based on the stakes and the outcome you’re trying to achieve, not your habits.

This isn’t a separate “step” or a bonus skill sitting off to the side of the model. It’s the operating system that makes both axes work.

Without product thinking and customer centricity:

  • Context switching turns into over‑engineering or reckless hacking.

  • Frontier work turns into hype‑chasing.

With them:

  • Context switching becomes deliberate: you know when speed matters and when reliability matters because you understand the customer outcome you’re aiming for.

  • Frontier work becomes meaningful: you’re not just playing with new tools — you’re using them to solve real customer problems in ways that weren’t possible before.

This is why Western Front Innovators behave differently once they reach a frontier domain. They:

  • Start backwards from customer outcomes , not just stories and tasks.

  • Ask, “What is the actual job‑to‑be‑done here?”

  • Push on why a feature matters and what success should look like.

  • Are willing to reshape the solution when the outcome demands it.

Now mix that mindset with frontier tech and the whole picture changes:

  • Instead of saying, “Give me tickets,” they say, “If our customers struggle with X, there’s probably a way to combine this new AI capability, this data we already have, and this workflow to solve it in a way that didn’t exist a year ago.”

These engineers don’t just ship features. They ship novel outcomes . And those outcomes get noticed fast.

Unfortunately, you may find yourself saying:

“I can’t find opportunities that give me the space to do what you’re suggesting.”

Make your own opportunities. Use some downtime to wow your colleagues with a new direction. Work on side projects and/or start your own freelance to build up a portfolio. Do absolutely anything but blame your job or the market. Ultimately only you are responsible for ensuring you grow the way you want. Remember that.

Also, good news…

Historically, companies haven’t hired nearly enough Western Front Innovators. They optimized for narrow speed (ship tickets) or narrow craftsmanship (polish small, stable areas) rather than people who could steer and adapt.

AI-assisted development is already changing the landscape. As the raw mechanics of coding get easier, the premium is quickly moving toward:

  • Deciding what to build.

  • Deciding how fast to move.

  • Deciding where new tools can reshape the problem altogether.

In this world, Western Front Innovators aren’t only nice to have on a team. They’re absolutely critical . And this means companies will soon have no choice but to begin more purposefully seeking them and fostering their growth.

If you’re a software engineer looking for an edge, don’t just collect tech buzzwords and hope that translates into some vague idea of “senior.”

Design for the top-right:

  • Avoid building your whole identity around stacks that are already saturated.

  • Move closer to frontiers where experience is in short supply.

  • Lean hard into customer centricity and product thinking.

  • Practice context switching on purpose: prototype here, craftsmanship there, and be explicit about why.

There always has been inherent demand for engineers who can do this (even if job postings don’t overtly advertise it). And moving forward, I believe this inherent demand will quickly turn explicit.

So in a world filled with engineers sprinting toward other people’s superpowers, opt out. Create your own. Be a Western Front Innovator.


Zig asserts are not C asserts

Lobsters
cryptocode.github.io
2025-12-09 14:52:54
Comments...
Original Article

I recently came across a piece of code in a Ziggit.dev post that gave me pause:

pub fn putOne(q: *@This(), io: Io, item: Elem) Cancelable!void {
    assert(try q.put(io, &.{item}, 1) == 1);
}
pub fn getOne(q: *@This(), io: Io) Cancelable!Elem {
    var buf: [1]Elem = undefined;
    assert(try q.get(io, &buf, 1) == 1);
    return buf[0];
}

…which led me to ask the following:

Just a quick side quest: Doesn’t the assert here risk put and get calls being optimized away? If not I think I might have misunderstood std.debug.assert’s doc comment and could use some education.

Since assert is a regular function, the argument is evaluated (and is indeed in debug/safe builds), but since the expression is assumed to be true (otherwise unreachable) it seems like the whole expression is allowed to be removed

Is there a difference whether the expression is fallible or not, or is deemed to have side effects?

Fortunately, core Zig team members frequently chime in with their expertise on Ziggit, and that was the case here as well.

The short answer to my question is: no, the put and get calls will not get optimized away. We’ll see why in a bit.

std.debug.assert and unreachable

If you’ve ever hit an assertion in Zig, then you have also looked at the implementation of std.debug.assert since it appears in the panic trace:

thread 33583038 panic: reached unreachable code

lib/std/debug.zig:550:14: 0x10495bc93 in assert (sideeffects)

if (!ok) unreachable; // assertion failure

That’s all assert does: if (!ok) unreachable;

  • If unreachable is hit in safe modes, then… well, you’ve seen the panic trace. Very helpful.

  • In optimizing modes, unreachable becomes a promise that control flow will not reach this point at all. Also very helpful: faster code!

Here’s the doc comment on assert that helped myself and some others get confused, despite the comment being 100% verifiably correct:

In ReleaseFast and ReleaseSmall modes, calls to this function are optimized away, and in fact the optimizer is able to use the assertion in its heuristics.

On closer inspection, this is just what the language reference entry on unreachable promises us. No more, no less.

This is very different from C’s assert which can nuke the whole thing through macros and preprocessor directives, depending on whether NDEBUG is set by the build system. It’s a similar story in many other languages - they have special constructs for assertions.

In Zig, std.debug.assert is a plain old function for which no special treatment is given. The idea that if (!ok) unreachable; somehow wires up the optimizer to always delete “the whole thing” in release builds is wrong.

Does this mean asserts can be expensive even in ReleaseFast mode?

Yes, because while the call to assert is gone, the LLVM optimizer that’s supposed to remove dead code isn’t always able to do so. Simple expressions like data.len > 0 will almost certainly be optimized out, but it’s less clear for anything non-trivial.

I shared an example in the Ziggit thread where dead code removal does not occur. Here’s an improved version by TibboddiT :

const std = @import("std");

fn check(val: []u8) bool {
    var sum: usize = 0;
    for (0..val.len * 500_000_000) |v| {
        sum += val[v % val.len];
    }

    return sum == 6874500000000;
}

pub fn main() void {
    var prng: std.Random.DefaultPrng = .init(12);
    const rand = prng.random();

    var buf: [100]u8 = undefined;
    rand.bytes(&buf);

    std.debug.assert(check(&buf));
}

Compile and run this under ReleaseFast on Zig 0.14.x and you’ll see that the program is busy for a good while. The core team believes this to be a missed optimization in LLVM.

If profiling shows that an assertion is expensive, or you’re just not confident it will be fully elided, you can do something like this:

if (std.debug.runtime_safety) std.debug.assert(check(&buf));

…or check against build modes when that makes more sense.

When the optimizer will definitely not remove code

Now back to the original question, which is about the opposite of trying to get rid of dead code. We want to keep code.

There are many reasons why code will never be removed by a correctly implemented optimizer. One of them is the presence of side effects [1] . Another reason is that writes to memory must be observable when that memory is later read.

Basically, the optimizer’s rule is that code removal must not lead to correctness bugs.

The put call in assert(try q.put(io, &.{item}, 1) == 1); has side effects and depends on memory coherence as there’s a get call elsewhere. We’re all good.

Conclusion:

  • The assert(expr) call is nothing more than if (!expr) unreachable where unreachable:
    • yields a helpful trace in safe builds, and
    • provides the optimizer with useful information in unsafe builds
  • The optimizer will never optimize away expr if doing so would lead to correctness issues
  • The optimizer is not always able to optimize away expr even when it’s effectively dead code

I’ll round this off with some wise words from ifreund on the question of whether Zig should match C’s assert behavior:

I think trying to match C’s assert is exactly what we should not do. I’ve seen many bugs caused by putting expressions with side effects inside the assert macro. Macros suck.


[1] In the Ziggit thread, Andrew Kelley shared the concrete list of side effects:

  • loading through a volatile pointer
  • storing through a volatile pointer
  • inline assembly with volatile keyword
  • atomics with volatile pointers
  • calling an extern function
  • @panic, @trap, @breakpoint
  • unreachable in safe optimization modes (equivalent to @panic)

Kaiju – General purpose 3D/2D game engine in Go and Vulkan with built in editor

Hacker News
github.com
2025-12-09 14:51:46
Comments...
Original Article

Kaiju Engine

Kaiju is a 2D/3D game engine written in Go (Golang) backed by Vulkan. The goal of the engine is to use a modern, easy, systems level programming language, with a focus on simplicity, to create a new kind of game engine.

  • 📄 2D / 🧊 3D Game Engine
  • 🪟 Windows
  • 🐧 Linux
  • 🤖 Android (NEW, support now functional)
  • 🍎 Mac (support is currently WIP)
  • 🤖👉⌨️ Local AI (LLM) interop
  • ⚠️ 🚧🏗️👷‍♂️ Work in progress, under heavy development
  • 🚚 Faster builds than other game engines
  • 🔥 Better performance than other game engines (9x faster than Unity out of the box)
  • 💾 Less memory than other engines

Join the community

Why Kaiju?

The current version of the base engine renders extremely fast, faster than most would think a garbage collected language could go. In my testing a release mode build of a game in Unity with nothing but a black background and a cube runs at about 1,600 FPS. In Kaiju, the same thing runs at around 5,400 FPS on the same machine. In fact, a complete game, with audio, custom cursors, real time PBR rendering with real time shadows, UI, and more runs at 2,712 FPS (in "debug" mode) screenshots or it didn't happen .

Why Go (golang)?

I love C, and because I love C and found out that Ken Thompson played a part in designing Go, I gave Go a chance. It has been such a joy to use and work with I decided to port my C game engine to Go. Go is a modern system-level language that allows me to write code the way I want to write code and even have the opportunity to do some crazy things if I want to (no strings attached). Also the simplicity and "just works" of writing Assembly code was a great boost to my happiness.

What's more, it's a language that other developers can easily learn and jump right into extending the engine/editor. No need for developers to re-figure out some bespoke macros or crazy templating nonsense. It's flat, easy, straight forward, and the foot-gun is hidden behind some walls, but there if you want it. Furthermore, developers can write their games in Go directly, no need for some alternative language that is different from the engine code (but we'll include Lua for modding).

What about the Garbage Collector?!

I am creating this section because I get asked about it whenever I mention "Go", often by people who don't realize that most public game engines use a garbage collector (GC).

The GC is actually a feature I'm happy with (shocker coming from a C guy). The reason is simple: if you're going to make a game engine that the public will use and that needs to be stable, you need a garbage collector. Unity has C# (and possibly an internal GC as well), Unreal has a GC (and it could use a tune-up if you ask me), and Godot has a GC as well, albeit in its scripting language or when you use C#. It is actually very important for public engines to have a GC because people are only human and make a lot of mistakes, mistakes they'll blame on you (the engine developer) before they blame themselves.

Coincidentally, the overall design I have for the engine plays very well with the GC, and last I measured, I have net-0 heap allocation while running (may need a new review). If you don't abuse the GC, you generally shouldn't feel it; it runs concurrently as well.

I'll be the first to admit, I think the developers of Go can create a better GC than I can, and probably better than Unreal and Unity too.

⚠️ WORK IN PROGRESS ⚠️

Though the engine is production-ready, the editor is not; feel free to join and contribute to its development.

For the latest updates, please join the Discord or check my Twitter/X .

Please review the Ad-Hoc editor readme

Compiling the engine

Please see the documentation on how to get started and compile the engine

Editor previews

full-project-run-cycle.mp4
in-engine-ai.mp4
working-code-binding.mp4
content-tagging.mp4
status-bar-update.mp4
adding-transformation-tools.mp4

Linux kernel version numbers

Lobsters
www.kroah.com
2025-12-09 14:47:57
Comments...
Original Article

Despite having a stable release model and cadence since December 2003, Linux kernel version numbers seem to baffle and confuse those that run across them, causing numerous groups to mistakenly make versioning statements that are flat out false. So let’s go into how this all works in detail.

This is a post in the series about the Linux kernel CVE release process:

  • Linux kernel versions, how the Linux kernel releases are numbered (this post)

“Old” kernel version scheme is no more

I’m going to ignore the “old” versioning scheme of Linux that was in place before the 2.6.0 release happened on December 17, 2003, as that model is no longer happening. It only confuses people when attempting to talk about code that is over 23 years old, and no one should be using those releases anymore, hopefully. The only thing that matters today about the releases that happened between 1991 and 2003 is that the developers have learned from their past mistakes and now are following a sane and simple release model and cadence.

Luckily even Wikipedia glosses over some of the mess that happened in those old development cycles, so the less said about them, the better. Moving on to…

The only things needed to remember about Linux kernel releases are:

  • All releases are “stable” and backwards compatible for userspace programs to all previous kernel releases.
  • Higher major and minor version numbers mean a newer release, and do not describe anything else.

All releases are stable

Once the 2.6.0 kernel was released, it was decided that the rule of kernel releases would be that every release would be “stable”. No kernel release should ever break any existing user’s code or workflow. Any regressions that happened would always be prioritized over new features, making it so that no user would ever have a reason to want to remain at an older kernel version.

This is essential when it comes to security bugs, as if all releases are stable, and will not break, then there is both no need to maintain older kernel versions as well as no risk for a user to upgrade to a new release with all bugfixes.

Higher numbers means newer

Along with every release being stable, the kernel developers at the 2.6.0 time decided to switch to a “time based release schedule”. This means that the kernel is released based on what is submitted for any specific development cycle during the 2 week merge window as described in the kernel documentation.

So with time-based releases happening on average every 10 weeks, the only way to distinguish between releases is the version number, which is incremented at each release.

Stable kernel branches

Once the kernel developers started on this “every release is stable” process, they soon realized that during the 10-week development cycle, there was a need to get the bugfixes that went into the development kernel into the “last” release, as that is what users were relying on. To accomplish this, the “stable kernel release” was created. The stable releases would take the bugfixes that went into the current development tree, apply them to the previous stable release, and do a new release that users could then use.

The rules of what is acceptable into a stable kernel are documented with the most important rule being the first one “It or an equivalent fix must already exist in Linux mainline (upstream).”

Major.Minor.Stable

Kernel version numbers are split into 3 fields, major, minor, and stable, separated by a ‘.’ character. The minor number is incremented by Linus every release that he makes, while the major number is only incremented every few years when the minor number gets too large for people. The major.minor pair is considered the “kernel branch” number for a release, and the stable number is then incremented for every stable release on that branch.

An example makes this a bit simpler to understand. Here is how the 5.2 kernel releases happened:

(Diagram: 5.2 kernel release)

First 5.2.0 was released by Linus, and then he continued on with the 5.3 development cycle, first releasing -rc1, and then -rc2, and so on until -rc7 which was followed by a stable release, 5.3.0.

At the time 5.2.0 was released, it was branched in the Linux stable git tree by the stable kernel maintainers, and stable releases started happening, 5.2.1, 5.2.2, 5.2.3, and so on. The changes in these stable releases were all first in Linus’s tree, before they were allowed to be in a stable release, ensuring that when a user upgrades from 5.2 to 5.3, they will not have any regressions of bugfixes that might have only gone into the 5.2.stable releases.

.y terminology

Many times, kernel developers will discuss a kernel branch as being “5.4.y” with “.y” being a way to refer to the stable kernel branch for the 5.4 release. This is also how the branches are named in the Linux stable git tree , which is where the terminology came from.

Stable release branches “stop”

What is important to remember is that stable release branches are ended after a period of time. Normally they last until a few weeks after the next minor release happens, but one kernel branch a year is picked to be a “longterm” kernel release branch that will live for at least 2 years. This kernel is usually the “last” kernel release of the year, and the support cycle can be seen on the kernel.org releases page

This ability for kernel releases to continue for a short while, or many years before going end-of-life, is important to realize when attempting to track security bugs and fixes over time, as many companies get confused when trying to compare version numbers against each other. It is NOT safe to do a simple “if this version number is bigger than the previous one, then all fixes for it will be in the next release.” You have to treat each kernel “branch” as a unique tree, and not compare them against each other in order to be able to properly track changes over time. But more about that later on…
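As a small illustration of that rule (my own sketch in C, not code from this post), a branch-aware comparison only orders two releases when they share the same major.minor branch:

#include <stdbool.h>
#include <stdio.h>

struct kver { int major, minor, stable; };

/* Only releases on the same major.minor branch can be meaningfully ordered;
   comparing across branches says nothing about which fixes each one contains. */
static bool kver_newer_on_branch(struct kver a, struct kver b, bool *a_is_newer) {
    if (a.major != b.major || a.minor != b.minor)
        return false; /* different branches: treat each as its own tree */
    *a_is_newer = a.stable > b.stable;
    return true;
}

int main(void) {
    struct kver a = { 5, 4, 120 }, b = { 5, 10, 3 };
    bool newer;

    if (!kver_newer_on_branch(a, b, &newer))
        printf("5.4.120 and 5.10.3 are on different branches; they cannot be compared\n");
    return 0;
}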

My favourite small hash table

Hacker News
www.corsix.org
2025-12-09 14:47:20
Comments...
Original Article

I'm the kind of person who thinks about the design and implementation of hash tables. One design which I find particularly cute, and I think deserves a bit more publicity, is Robin Hood open-addressing with linear probing and power-of-two table size. If you're not familiar with hash table terminology, that might look like a smorgasbord of random words, but it should become clearer as we look at some actual code.

To keep the code simple to start with, I'm going to assume:

  1. Keys are randomly-distributed 32-bit integers.
  2. Values are also 32-bit integers.
  3. If the key 0 is present, its value is not 0 .
  4. The table occupies at most 32 GiB of memory.

Each slot in the table is either empty, or holds a key and a value. The combination of properties (1) and (2) allows a key/value pair to be stored as a 64-bit integer, and property (3) means that the 64-bit value 0 can be used to represent an empty slot (some hash table designs also need a special value for representing tombstones, but this design doesn't need tombstones). Combining a key and a value into 64 bits couldn't be easier: the low 32 bits hold the key, and the high 32 bits hold the value.
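As a trivial sketch of that encoding (the helper names here are mine, not from the original), packing and unpacking a slot looks like this:

#include <stdint.h>

/* Key in the low 32 bits, value in the high 32 bits; an all-zero slot
   represents "empty" thanks to property (3). */
static inline uint64_t slot_pack(uint32_t key, uint32_t val) {
  return (uint64_t)key | ((uint64_t)val << 32);
}

static inline uint32_t slot_key(uint64_t slot)   { return (uint32_t)slot; }
static inline uint32_t slot_value(uint64_t slot) { return (uint32_t)(slot >> 32); }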

The structure for the table itself needs a pointer to the array of slots, the length of said array, and the number of non-empty slots. As the length is always a power of two, it's more useful to store length - 1 instead of length , which leads to mask rather than length , and property (4) means that mask can be stored as 32 bits. As the load factor should be less than 100%, we can assume count < length , and hence count can also be 32 bits. This leads to a mundane-looking:

struct hash_table_t {
  uint64_t* slots;
  uint32_t mask;
  uint32_t count;
};

Property (1) means that we don't need to hash keys, as they're already randomly distributed. Every possible key K has a "natural position" in the slots array, which is just K & mask . If there are collisions, the slot in which a key actually ends up might be different to its natural position. The "linear probing" part of the design means that if K cannot be in its natural position, the next slot to be considered is (K + 1) & mask , and if not that slot then (K + 2) & mask , then (K + 3) & mask , and so on. This leads to the definition of a "chain": if K is some key present in the table, C K denotes the sequence of slots starting with K 's natural position and ending with K 's actual position. We have the usual property of open-addressing: none of the slots in C K are empty slots. The "Robin Hood" part of the design then imposes an additional rather interesting property: for each slot S in C K , Score(S.Index, S.Key) ≥ Score(S.Index, K) , where:

  • S.Index is the index of S in the slots array (not the index of it in C K ).
  • S.Key is the key present in slot S (i.e. the low 32 bits of slots[S.Index] ).
  • Score(Index, Key) is (Index - Key) & mask .

These properties give us the termination conditions for the lookup algorithm: for a possible key K , we look at each slot starting from K 's natural position, and either we find K , or we find an empty slot, or we find a slot with Score(S.Index, S.Key) < Score(S.Index, K) . In either of the latter two cases, K cannot have been present in the table. In the function below, Score(S.Index, K) is tracked as d . In a language with a modern type system, the result of a lookup would be Optional<Value> , but if sticking to plain C, property (3) can be used to make something similar: the 64-bit result is zero if the key is absent, and otherwise the value is in the low 32 bits of the result (which may themselves be zero, but the full 64-bit result will be non-zero). The logic is thus:

uint64_t table_lookup(hash_table_t* table, uint32_t key) {
  uint32_t mask = table->mask;
  uint64_t* slots = table->slots;
  for (uint32_t d = 0;; ++d) {
    uint32_t idx = (key + d) & mask;
    uint64_t slot = slots[idx];
    if (slot == 0) {
      return 0;
    } else if (key == (uint32_t)slot) {
      return (slot >> 32) | (slot << 32);
    } else if (((idx - (uint32_t)slot) & mask) < d) {
      return 0;
    }
  }
}

If using a rich 64-bit CPU architecture, many of the expressions in the above function are cheaper than they might initially seem:

  • slots[idx] involves zero-extending idx from 32 bits to 64, multiplying it by sizeof(uint64_t) , adding it to slots , and then loading from that address. All this is a single instruction on x86-64 or arm64.
  • key == (uint32_t)slot involves a comparison using the low 32 bits of a 64-bit register, which is a completely standard operation on x86-64 or arm64.
  • (slot >> 32) | (slot << 32) is a rotation by 32 bits, which again is a single instruction on x86-64 or arm64.

On the other hand, if using riscv64, things are less good:

  • If the Zba extension is present, sh3add.uw is a single instruction for zero-extending idx from 32 bits to 64, multiplying it by sizeof(uint64_t) , and adding it to slots . If not, each step is a separate instruction, though the zero-extension can be eliminated with a slight reformulation to encourage the compiler to fold the zero-extension onto the load of table->mask (as riscv64 usually defaults to making sign-extension free, in contrast to x86-64/arm64 which usually make zero-extension free). Regardless, the load is always its own instruction.
  • key == (uint32_t)slot hits a gap in the riscv64 ISA: it doesn't have any 32-bit comparison instructions, so this either becomes a 32-bit subtraction followed by a 64-bit comparison against zero, or promotion of both operands from 32 bits to 64 bits followed by a 64-bit comparison.
  • If the Zbb extension is present, rotations are a single instruction. If not, they're three instructions, and so it becomes almost worth reworking the slot layout to put the key in the high 32 bits and the value in the low 32 bits.

Moving on from lookup to insertion, there are various different options for what to do when the key being inserted is already present. I'm choosing to show a variant which returns the old value (in the same form as table_lookup returns) and then overwrites with the new value, though other variants are obviously possible. The logic follows the same overall structure as seen in table_lookup :

uint64_t table_set(hash_table_t* table, uint32_t key, uint32_t val) {
  uint32_t mask = table->mask;
  uint64_t* slots = table->slots;
  uint64_t kv = key + ((uint64_t)val << 32);
  for (uint32_t d = 0;; ++d) {
    uint32_t idx = ((uint32_t)kv + d) & mask;
    uint64_t slot = slots[idx];
    if (slot == 0) {
      // Inserting new value (and slot was previously empty)
      slots[idx] = kv;
      break;
    } else if ((uint32_t)kv == (uint32_t)slot) {
      // Overwriting existing value
      slots[idx] = kv;
      return (slot >> 32) | (slot << 32);
    } else {
      uint32_t d2 = (idx - (uint32_t)slot) & mask;
      if (d2 < d) {
        // Inserting new value, and moving existing slot
        slots[idx] = kv;
        table_reinsert(slots, mask, slot, d2);
        break;
      }
    }
  }
  if (++table->count * 4ull >= mask * 3ull) {
    // Expand table once we hit 75% load factor
    table_rehash(table);
  }
  return 0;
}

To avoid the load factor becoming too high, the above function will sometimes grow the table by calling this helper function:

void table_rehash(hash_table_t* table) {
  uint32_t old_mask = table->mask;
  uint32_t new_mask = old_mask * 2u + 1u;
  uint64_t* new_slots = calloc(new_mask + 1ull, sizeof(uint64_t));
  uint64_t* old_slots = table->slots;
  uint32_t idx = 0;
  do {
    uint64_t slot = old_slots[idx];
    if (slot != 0) {
      table_reinsert(new_slots, new_mask, slot, 0);
    }
  } while (idx++ != old_mask);
  table->slots = new_slots;
  table->mask = new_mask;
  free(old_slots);
}

Both table_set and table_rehash make use of a helper function which is very similar to table_set, but doesn't need to check for overwriting an existing key and also doesn't need to update count:

void table_reinsert(uint64_t* slots, uint32_t mask, uint64_t kv, uint32_t d) {
  for (;; ++d) {
    uint32_t idx = ((uint32_t)kv + d) & mask;
    uint64_t slot = slots[idx];
    if (slot == 0) {
      slots[idx] = kv;
      break;
    } else {
      uint32_t d2 = (idx - (uint32_t)slot) & mask;
      if (d2 < d) {
        slots[idx] = kv;
        kv = slot;
        d = d2;
      }
    }
  }
}

That covers lookup and insertion, so next up is key removal. As already hinted at, this hash table design doesn't need tombstones. Instead, removing a key involves finding the slot containing that key and then shifting slots left until finding an empty slot or a slot with Score(S.Index, S.Key) == 0 . This removal strategy works due to a neat pair of emergent properties:

  • If slot S has Score(S.Index, S.Key) != 0 , it is viable for S.Key to instead be at (S.Index - 1) & mask (possibly subject to additional re-arranging to fill the gap formed by moving S.Key ).
  • If slot S has Score(S.Index, S.Key) == 0 , and S is part of some chain C K , then S is at the very start of C K . Hence it is viable to turn (S.Index - 1) & mask into an empty slot without breaking any chains.

This leads to the tombstone-free removal function, which follows the established pattern of returning either the old value or zero:

uint64_t table_remove(hash_table_t* table, uint32_t key) {
  uint32_t mask = table->mask;
  uint64_t* slots = table->slots;
  for (uint32_t d = 0;; ++d) {
    uint32_t idx = (key + d) & mask;
    uint64_t slot = slots[idx];
    if (slot == 0) {
      return 0;
    } else if (key == (uint32_t)slot) {
      uint32_t nxt = (idx + 1) & mask;
      --table->count;
      while (slots[nxt] && ((slots[nxt] ^ nxt) & mask)) {
        slots[idx] = slots[nxt];
        idx = nxt;
        nxt = (idx + 1) & mask;
      }
      slots[idx] = 0;
      return (slot >> 32) | (slot << 32);
    } else if (((idx - (uint32_t)slot) & mask) < d) {
      return 0;
    }
  }
}

The final interesting hash table operation is iterating over all keys and values, which is just an array iteration combined with filtering out zeroes:

void table_iterate(hash_table_t* table, void(*visit)(uint32_t key, uint32_t val)) {
  uint64_t* slots = table->slots;
  uint32_t mask = table->mask;
  uint32_t idx = 0;
  do {
    uint64_t slot = slots[idx];
    if (slot != 0) {
      visit((uint32_t)slot, (uint32_t)(slot >> 32));
    }
  } while (idx++ != mask);
}

That wraps up the core concepts of this hash table, so now it is time to revisit some of the initial simplifications.

If keys are 32-bit integers but are not randomly-distributed, then we just need an invertible hash function from 32 bits to 32 bits, the purpose of which is to take keys following ~any real-world pattern and emit a ~random pattern. The table_lookup , table_set , and table_remove functions gain key = hash(key) at the very start but are otherwise unmodified (noting that if the hash function is invertible, hash equality implies key equality, hence no need to explicitly check key equality), and table_iterate is modified to apply the inverse function before calling visit . If hardware CRC32 / CRC32C instructions are present (as is the case on sufficiently modern x86-64 and arm64 chips), these can be used for the task, although their inverses are annoying to compute, so perhaps not ideal if iteration is an important operation. If CRC32 isn't viable, one option out of many is:

uint32_t u32_hash(uint32_t h) {
  h ^= h >> 16;
  h *= 0x21f0aaad;
  h ^= h >> 15;
  h *= 0x735a2d97;
  h ^= h >> 15;
  return h;
}
uint32_t u32_unhash(uint32_t h) {
  h ^= h >> 15; h ^= h >> 30;
  h *= 0x97132227;
  h ^= h >> 15; h ^= h >> 30;
  h *= 0x333c4925;
  h ^= h >> 16;
  return h;
}
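To make the integration concrete, thin wrappers along these lines (my naming, building on the functions above, not code from the article) hash keys on the way in; because the hash is invertible, hash equality implies key equality, so the core routines need no other change:

uint64_t table_lookup_hashed(hash_table_t* table, uint32_t key) {
  return table_lookup(table, u32_hash(key));
}

uint64_t table_set_hashed(hash_table_t* table, uint32_t key, uint32_t val) {
  return table_set(table, u32_hash(key), val);
}

uint64_t table_remove_hashed(hash_table_t* table, uint32_t key) {
  return table_remove(table, u32_hash(key));
}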

If keys and values are larger than 32 bits, then the design can be augmented with a separate array of key/value pairs, with the design as shown containing a 32-bit hash of the key and the array index of the key/value pair. To meet property (3) in this case, either the hash function can be chosen to never be zero, or "array index plus one" can be stored rather than "array index". It is not possible to make the hash function invertible in this case, so table_lookup , table_set , and table_remove do need extending to check for key equality after confirming hash equality. Iteration involves walking the separate array of key/value pairs rather than the hash structure, which has the added benefit of iteration order being related to insertion order rather than hash order. As another twist on this, if keys and values are variably-sized, then the design can instead be augmented with a separate array of bytes , with key/value pairs serialised somewhere in that array, and the hash structure containing a 32-bit hash of the key and the byte offset (within the array) of the key/value pair.
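As a rough sketch of that augmentation (the type and field names are mine, not the article's): the hash structure stores only a key hash plus an index into a side array of full entries, lookups additionally compare the full key after a hash match, and iteration walks the entries array in insertion order.

#include <stddef.h>
#include <stdint.h>

typedef struct {
  void*  key;      /* full key bytes */
  size_t key_len;
  void*  value;
} kv_entry_t;

typedef struct {
  /* Each slot packs a 32-bit key hash (low bits) with "entry index + 1"
     (high bits), so an all-zero slot still means "empty". */
  uint64_t*   slots;
  uint32_t    mask;
  uint32_t    count;
  kv_entry_t* entries;      /* dense array of full key/value pairs */
  uint32_t    entries_len;
} indirect_hash_table_t;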

Of course, a design can only stretch so far. If you're after a concurrent lock-free hash table, look elsewhere. If you can rely on 128-bit SIMD instructions being present, you might instead want to group together every 16 key/value pairs, keep an 8-bit hash of each key, and rely on SIMD to perform 16 hash comparisons in parallel. If you're building hardware rather than software, it can be appealing to have multiple hash functions, each one addressing its own SRAM bank. There is no one-size-fits-all hash table, but I've found the one shown here to be good for a lot of what I do.

Two Decades of Broken Promises at Atlantic Yards

hellgate
hellgatenyc.com
2025-12-09 14:45:19
And more links for your Tuesday....
Original Article

Have you been listening to the Hell Gate Podcast? You can catch last week's episode here .

On the subterranean practice basketball court at Barclays Center on Monday night, a group of real estate developers stood in front of more than 100 Brooklynites and asked them to participate in the "reimagining phase" of an unfinished project that began more than two decades ago, and is now marred with a history of broken promises.

The meeting was the second in a series of public workshops held by the state's quasi-private economic arm, Empire State Development , with developers Cirrus Real Estate Partners and LCOR after the pair took over the Atlantic Yards project from Greenland USA in October.

For locals, there's a certain deja vu: ESD first came to them in 2003 with a plan to build a new neighborhood around a centerpiece arena. The plan would bypass the City's typical land use review process—with the state seizing some properties through eminent domain. But it would be totally worth it, because the developer would build 2,250 units of much-needed affordable housing atop the railyard. Also: Brooklyn would get a basketball team . If Jay-Z was behind it , could it really be that bad?

Despite fierce opposition to the plans at the time—including warnings that the affordable housing was an empty promise—the land got seized, the stadium got built, and 3,212 apartments went up, with fewer than half of them "affordable." Developer Greenland defaulted on nearly $350 million of loans in 2023, forcing a foreclosure auction before it could build the rest of the promised housing and shorting New Yorkers by about 900 affordable units. And despite Greenland signing a legally binding agreement that required it to build all those units by May of 2025, New York officials opted not to hold the developer accountable for the millions in monthly fines it was meant to pay for not doing the damn thing, citing their fear of a lawsuit.


Mistral Releases Devstral 2 (72.2% SWE-Bench Verified) and Vibe CLI

Hacker News
mistral.ai
2025-12-09 14:45:01
Comments...
Original Article

Today, we're releasing Devstral 2—our next-generation coding model family available in two sizes: Devstral 2 (123B) and Devstral Small 2 (24B). Devstral 2 ships under a modified MIT license, while Devstral Small 2 uses Apache 2.0. Both are open-source and permissively licensed to accelerate distributed intelligence.

Devstral 2 is currently free to use via our API .

We are also introducing Mistral Vibe, a native CLI built for Devstral that enables end-to-end code automation.

Highlights.

  1. Devstral 2: SOTA open model for code agents with a fraction of the parameters of its competitors and achieving 72.2% on SWE-bench Verified.

  2. Up to 7x more cost-efficient than Claude Sonnet at real-world tasks.

  3. Mistral Vibe CLI: Native, open-source agent in your terminal solving software engineering tasks autonomously.

  4. Devstral Small 2: 24B parameter model available via API or deployable locally on consumer hardware.

  5. Compatible with on-prem deployment and custom fine-tuning.

Devstral: the next generation of SOTA coding.

Devstral 2 is a 123B-parameter dense transformer supporting a 256K context window. It reaches 72.2% on SWE-bench Verified—establishing it as one of the best open-weight models while remaining highly cost efficient. Released under a modified MIT license, Devstral sets the open state-of-the-art for code agents.

Devstral Small 2 scores 68.0% on SWE-bench Verified, and places firmly among models up to five times its size while being capable of running locally on consumer hardware.

Chart: SWE-bench Verified, open-weight vs. proprietary models.

Devstral 2 (123B) and Devstral Small 2 (24B) are 5x and 28x smaller than DeepSeek V3.2, and 8x and 41x smaller than Kimi K2—proving that compact models can match or exceed the performance of much larger competitors. Their reduced size makes deployment practical on limited hardware, lowering barriers for developers, small businesses, and hobbyists.

Chart: SWE-bench Verified performance vs. model size.

Built for production-grade workflows.

Devstral 2 supports exploring codebases and orchestrating changes across multiple files while maintaining architecture-level context. It tracks framework dependencies, detects failures, and retries with corrections—solving challenges like bug fixing and modernizing legacy systems.

The model can be fine-tuned to prioritize specific languages or optimize for large enterprise codebases.

We evaluated Devstral 2 against DeepSeek V3.2 and Claude Sonnet 4.5 using human evaluations conducted by an independent annotation provider, with tasks scaffolded through Cline. Devstral 2 shows a clear advantage over DeepSeek V3.2, with a 42.8% win rate versus 28.6% loss rate. However, Claude Sonnet 4.5 remains significantly preferred, indicating a gap with closed-source models persists.

Chart: Model performance comparison (human evaluation).

“Devstral 2 is at the frontier of open-source coding models. In Cline, it delivers a tool-calling success rate on par with the best closed models; it's a remarkably smooth driver. This is a massive contribution to the open-source ecosystem.” — Cline.

“Devstral 2 was one of our most successful stealth launches yet, surpassing 17B tokens in the first 24 hours. Mistral AI is moving at Kilo Speed with a cost-efficient model that truly works at scale.” — Kilo Code.

Devstral Small 2, a 24B-parameter model with the same 256K context window and released under Apache 2.0, brings these capabilities to a compact, locally deployable form. Its size enables fast inference, tight feedback loops, and easy customization—with fully private, on-device runtime. It also supports image inputs, and can power multimodal agents.

Mistral Vibe CLI.

Mistral Vibe CLI is an open-source command-line coding assistant powered by Devstral. It explores, modifies, and executes changes across your codebase using natural language—in your terminal or integrated into your preferred IDE via the Agent Communication Protocol. It is released under the Apache 2.0 license.

Vibe CLI provides an interactive chat interface with tools for file manipulation, code searching, version control, and command execution. Key features:

  • Project-aware context: Automatically scans your file structure and Git status to provide relevant context

  • Smart references: Reference files with @ autocomplete, execute shell commands with !, and use slash commands for configuration changes

  • Multi-file orchestration: Understands your entire codebase—not just the file you're editing—enabling architecture-level reasoning that can halve your PR cycle time

  • Persistent history, autocompletion, and customizable themes.

You can run Vibe CLI programmatically for scripting, toggle auto-approval for tool execution, configure local models and providers through a simple config.toml, and control tool permissions to match your workflow.

Get started.

Devstral 2 is currently offered free via our API . After the free period, the API pricing will be $0.40/$2.00 per million tokens (input/output) for Devstral 2 and $0.10/$0.30 for Devstral Small 2.

We’ve partnered with leading, open agent tools Kilo Code and Cline to bring Devstral 2 to where you already build.

Mistral Vibe CLI is available as an extension in Zed , so you can use it directly inside your IDE.

Recommended deployment for Devstral.

Devstral 2 is optimized for data center GPUs and requires a minimum of 4 H100-class GPUs for deployment. You can try it today on build.nvidia.com . Devstral Small 2 is built for single-GPU operation and runs across a broad range of NVIDIA systems, including DGX Spark and GeForce RTX. NVIDIA NIM support will be available soon.

Devstral Small runs on consumer-grade GPUs as well as CPU-only configurations with no dedicated GPU required.

For optimal performance, we recommend a temperature of 0.2 and following the best practices defined for Mistral Vibe CLI .

Contact us.

We’re excited to see what you will build with Devstral 2, Devstral Small 2, and Vibe CLI!

Share your projects, questions, or discoveries with us on X/Twitter , Discord , or GitHub .

We’re hiring!

If you’re interested in shaping open-source research and building world-class interfaces that bring truly open, frontier AI to users, we welcome you to apply to join our team .

Security updates for Tuesday

Linux Weekly News
lwn.net
2025-12-09 14:15:10
Security updates have been issued by AlmaLinux (kernel, kernel-rt, and webkit2gtk3), Fedora (abrt and mingw-libpng), Mageia (apache and libpng), Oracle (abrt, go-toolset:rhel8, kernel, sssd, and webkit2gtk3), Red Hat (kernel and kernel-rt), SUSE (gimp, gnutls, kubevirt, virt-api-container, virt-cont...
Original Article
Dist. ID Release Package Date
AlmaLinux ALSA-2025:22801 8 kernel 2025-12-09
AlmaLinux ALSA-2025:22405 9 kernel 2025-12-09
AlmaLinux ALSA-2025:22800 8 kernel-rt 2025-12-09
AlmaLinux ALSA-2025:22789 8 webkit2gtk3 2025-12-09
AlmaLinux ALSA-2025:22790 9 webkit2gtk3 2025-12-09
Fedora FEDORA-2025-ed228a4b71 F41 abrt 2025-12-09
Fedora FEDORA-2025-9d0f04f316 F42 mingw-libpng 2025-12-09
Fedora FEDORA-2025-f54c75f2f9 F43 mingw-libpng 2025-12-09
Mageia MGASA-2025-0322 9 apache 2025-12-08
Mageia MGASA-2025-0323 9 libpng 2025-12-08
Oracle ELSA-2025-22760 OL8 abrt 2025-12-08
Oracle ELSA-2025-22668 OL8 go-toolset:rhel8 2025-12-08
Oracle ELSA-2025-21931 OL10 kernel 2025-12-08
Oracle ELSA-2025-22395 OL10 kernel 2025-12-08
Oracle ELSA-2025-22801 OL8 kernel 2025-12-08
Oracle ELSA-2025-19847 OL7 sssd 2025-12-08
Oracle ELSA-2025-22789 OL8 webkit2gtk3 2025-12-08
Oracle ELSA-2025-22790 OL9 webkit2gtk3 2025-12-08
Red Hat RHSA-2025:22910-01 EL7 kernel 2025-12-09
Red Hat RHSA-2025:22752-01 EL8.4 kernel 2025-12-09
Red Hat RHSA-2025:22006-01 EL8.6 kernel 2025-12-09
Red Hat RHSA-2025:22072-01 EL8.8 kernel 2025-12-09
Red Hat RHSA-2025:22066-01 EL9.0 kernel 2025-12-09
Red Hat RHSA-2025:21091-01 EL9.0 kernel 2025-12-09
Red Hat RHSA-2025:22095-01 EL9.2 kernel 2025-12-09
Red Hat RHSA-2025:21760-01 EL9.4 kernel 2025-12-09
Red Hat RHSA-2025:22914-01 EL7 kernel-rt 2025-12-09
Red Hat RHSA-2025:22087-01 EL9.0 kernel-rt 2025-12-09
Red Hat RHSA-2025:21136-01 EL9.0 kernel-rt 2025-12-09
Red Hat RHSA-2025:22124-01 EL9.2 kernel-rt 2025-12-09
Red Hat RHSA-2025:21128-01 EL9.2 kernel-rt 2025-12-09
SUSE SUSE-SU-2025:4324-1 SLE15 oS15.4 oS15.6 gimp 2025-12-09
SUSE SUSE-SU-2025:4323-1 SLE15 oS15.6 gnutls 2025-12-09
SUSE SUSE-SU-2025:4330-1 SLE15 kubevirt, virt-api-container, virt-controller-container, virt-exportproxy-container, virt-exportserver-container, virt-handler-container, virt-launcher-container, virt-libguestfs-t 2025-12-09
SUSE SUSE-SU-2025:4325-1 oS15.6 postgresql13 2025-12-09
Ubuntu USN-7412-3 16.04 18.04 gnupg2 2025-12-09
Ubuntu USN-7916-1 14.04 16.04 18.04 20.04 22.04 24.04 25.04 25.10 python-apt 2025-12-09
Ubuntu USN-7915-1 18.04 20.04 24.04 25.04 25.10 radare2 2025-12-09
Ubuntu USN-7914-1 22.04 24.04 25.04 25.10 webkit2gtk 2025-12-08

Border Patrol Agent Recorded Raid with Meta’s Ray-Ban Smart Glasses

404 Media
www.404media.co
2025-12-09 14:02:09
New videos and photos shared with 404 Media show a Border Patrol agent wearing Meta Ray-Bans glasses with the recording light clearly on. This is despite a DHS ban on officers recording with personal devices....
Original Article

On a recent immigration raid, a Border Patrol agent wore a pair of Meta’s Ray-Ban smart glasses, with the privacy light clearly on signaling he was recording the encounter, which agents are not permitted to do, according to photos and videos of the incident shared with 404 Media.

Previously, when 404 Media covered Customs and Border Protection (CBP) officials’ use of Meta’s Ray-Bans, it wasn’t clear if the officials were using them to record raids because the recording lights were not on in any of the photos seen by 404 Media. In the new material from Charlotte, North Carolina, during the recent wave of immigration enforcement, the recording light is visibly illuminated.

That is significant because CBP says it does not allow employees to use personal recording devices. CBP told 404 Media it does not have an arrangement with Meta, indicating this official was wearing personally-sourced glasses.

An activist in Charlotte provided the photos and videos to 404 Media. 404 Media granted them anonymity to protect them from retaliation.

They said the encounter happened at a busy intersection surrounded by a forest where a flower seller usually sets up shop. “By the time we showed up, the flower vendor had apparently seen Border Patrol agents approaching, and he ran into the woods,” the activist said. “They then deployed agents that were wearing these bucket hats into the woods.”

Image: 404 Media.

One of those agents was wearing the Meta Ray-Ban glasses, the material shows.

When we initially wrote about CBP agents wearing Meta Ray-Bans in Los Angeles, privacy experts told 404 Media that Department of Homeland Security (DHS) policies ban agents from wearing personal recording devices and also explicitly ban agents from taking their own recordings.

CBP’s policy on recording devices states that “no personally owned devices may be used in lieu of IDVRS [Incident Driven Video Recording Systems] to record law enforcement encounters.” It adds that “recorded data shall not be downloaded or recorded for personal use or posted onto a personally owned device.” The broader DHS policy says that “the use of personally owned [Body Worn Cameras] or other video, audio, or digital recording devices to record official law enforcement activities is prohibited.”

In a statement to 404 Media, a CBP spokesperson reaffirmed that the agency does not have any contract with Meta, and said that agents cannot use personal recording devices, but can bring “personally purchased sunglasses.” The statement did not say anything about what happens if the sunglasses happen to have a camera and microphone inside of them.

“CBP does not have an arrangement with Meta. The use of personal recording devices is not authorized; however, Border Patrol agents may wear personally purchased sunglasses,” the CBP spokesperson told 404 Media. “CBP utilize Go Pros mounted to helmets or body armor at times, as well as traditional DSLR handheld cameras.”

Meta did not respond to a request for comment.

In November, DHS launched an operation it called “Charlotte’s Web,” focused on the North Carolina city. In its announcement, DHS pointed to several criminals it said it detained. Data recently obtained by the Cato Institute showed that 73 percent of people detained by ICE since October had no criminal convictions, and five percent had a violent criminal conviction.

About the author

Joseph is an award-winning investigative journalist focused on generating impact. His work has triggered hundreds of millions of dollars worth of fines, shut down tech companies, and much more.

Joseph Cox

Fate: A modern data client for React and tRPC

Hacker News
fate.technology
2025-12-09 14:00:53
Comments...
Original Article

December 9th, 2025 by Christoph Nakazawa

I'm excited to announce the initial alpha release of fate , a modern data client for React & tRPC. fate combines view composition, normalized caching, data masking, Async React features, and tRPC's type safety.

A modern data client for React & tRPC

fate is designed to make data fetching and state management in React applications more composable, declarative, and predictable. The framework has a minimal API, no DSL, and no magic— it's just JavaScript .

GraphQL and Relay introduced several novel ideas: fragments co‑located with components, a normalized cache keyed by global identifiers, and a compiler that hoists fragments into a single network request. These innovations made it possible to build large applications where data requirements are modular and self‑contained.

Nakazawa Tech builds apps and games primarily with GraphQL and Relay. We advocate for these technologies in talks and provide templates ( server , client ) to help developers get started quickly.

However, GraphQL comes with its own type system and query language. If you are already using tRPC or another type‑safe RPC framework, it's a significant investment to adopt and implement GraphQL on the backend. This investment often prevents teams from adopting Relay on the frontend.

Many React data frameworks lack Relay's ergonomics, especially fragment composition, co-located data requirements, predictable caching, and deep integration with modern React features. Optimistic updates usually require manually managing keys and imperative data updates, which is error-prone and tedious.

fate takes the great ideas from Relay and puts them on top of tRPC. You get the best of both worlds: type safety between the client and server, and GraphQL-like ergonomics for data fetching. Using fate usually looks like this:

tsx

export const PostView = view<Post>()({
  author: UserView,
  content: true,
  id: true,
  title: true,
});

export const PostCard = ({ post: postRef }: { post: ViewRef<'Post'> }) => {
  const post = useView(PostView, postRef);

  return (
    <Card>
      <h2>{post.title}</h2>
      <p>{post.content}</p>
      <UserCard user={post.author} />
    </Card>
  );
};

Learn more about fate's core concepts or get started with a ready-made template .

Journey to fate

I was part of the original Relay and React teams at Facebook in 2013, but I didn't build Relay. While I worked on deploying the first server-side rendering engine for React and migrating Relay from React mixins to higher-order components through codemods, I honestly didn't fully grasp how far ahead everyone else on the Relay team was back then.

In the following years, Relay became the default data framework at Facebook. It was such an elegant way to handle client-side data that I had assumed it would gain widespread adoption. That didn't happen, and its backend companion GraphQL has become divisive in the web ecosystem.

I spent several years rebuilding a stack similar to the frontend stack at Facebook , using GraphQL and Relay, and even forking and rewriting abandoned Facebook libraries such as fbtee . I built Athena Crisis , a video game, and many other apps using this stack.

However, in recent months I have been exposed to the reality of React data fetching. It tends to look something like this:

tsx

const { data: post, isLoading, isError } = useFetch(…)

if (isLoading) return <Loading />;
if (isError) return <Error />;
if (!post) return <NullState />;

return <Post post={post} />;

This boilerplate is repetitive and ok, but not great. The real problems start when data changes. Mutations tend to have complex logic with detailed patches to the local cache or for handling rollbacks. For example:

tsx

mutate({
  // Complex cache patching logic or detailed cache clearing calls.
  onSuccess: (list, newItem) => {
    cache.remove('posts');
    cache.remove('post', newItem.id);
    return cache
      .get('root-posts-list')
      .map((item) => (item.id === newItem.id ? newItem : item));
  },
  // Complex rollback logic.
  onError: () => {
    cache.restore(previousCacheState);
    cache
      .get('root-posts-list')
      .map((item) => (item.id === oldItem.id ? oldItem : item));
  },
});

When your data client is an abstraction over fetch , keeping client state consistent gets hard quickly. Correctly handling mutations often requires knowing every place in your application that might fetch the same data. That often leads to defensive refetching and waterfalls down the component tree. Component trees frequently look like this:

(Diagram: a component tree with data fetching at every level.)

To be clear: These libraries are great at fetching data . I know better patterns are available in most of these libraries, and advanced developers can avoid many of the downsides. Sync engines address these problems, but they're challenging to adopt and also come with trade-offs.

Still, it's too easy to get something wrong. Codebases become brittle and hard to maintain. Looking ahead to a world where AI increasingly writes more of our code and gravitates towards simple, idiomatic APIs, the problem is that request-centric fetch APIs exist at all .

Building fate

I did not want to compromise on the key insights from Relay: a normalized cache, declarative data dependencies, and view co-location. At around the same time, I watched Ricky Hanlon's two-part React Conf talk about Async React and got excited to start building.

When fetch-based APIs cache data based on requests , people think about when to fetch data, and requests happen at every level of the component tree. This leads to boilerplate, complexity, and inconsistency. Instead, fate caches data by objects , shifts thinking to what data is required , and composes data requirements up to a single request at the root.

A typical component tree in a React application using fate might look like this:

(Diagram: a component tree with a single data request at the root.)

Using fate

fate 's API is minimal: It's just JavaScript , focused on answering: "Can we make development easier?"

Views

Let me show you a basic fate code example that declares its data requirements as a "view", co-located with a component. fate requires you to explicitly "select" each field that you plan to use in your components as a "view" into your data:

tsx

import type { Post } from '@org/server/views.ts';
import { UserView } from './UserCard.tsx';
import { useView, view, ViewRef } from 'react-fate';

export const PostView = view<Post>()({
  author: UserView,
  content: true,
  id: true,
  title: true,
});

export const PostCard = ({ post: postRef }: { post: ViewRef<'Post'> }) => {
  const post = useView(PostView, postRef);

  return (
    <Card>
      <h2>{post.title}</h2>
      <p>{post.content}</p>
      <UserCard user={post.author} />
    </Card>
  );
};

A ViewRef is a reference to a concrete object of a specific type, for example a Post with id 7 . It contains the unique ID of the object, the type name and some fate -specific metadata.

fate creates and manages these references for you, and you can pass them around your components as needed to resolve them against their views.

Requests

Pass the composed views to useRequest at the root of your app, and it'll suspend and fetch data in a single request using tRPC's HTTP Batch Link .

tsx

import { useRequest } from 'react-fate';
import { PostCard, PostView } from './PostCard.tsx';

export function App() {
  const { posts } = useRequest({ posts: { list: PostView } });
  return posts.map((post) => <PostCard key={post.id} post={post} />);
}

Actions

fate does not provide hooks for mutations like traditional data fetching libraries do. Instead, all tRPC mutations are exposed as actions for use with useActionState and React Actions. They support optimistic updates out of the box.

A LikeButton component using fate Actions and an async component library might look like this:

tsx

const LikeButton = ({ post }) => {
  const fate = useFateClient();
  const [result, like] = useActionState(fate.actions.post.like, null);

  return (
    <Button
      action={() =>
        like({ input: { id: post.id }, optimistic: { likes: post.likes + 1 } })
      }
    >
      {result?.error ? 'Oops!' : 'Like'}
    </Button>
  );
};

When this action is called, fate automatically updates all views that depend on the likes field of the particular Post object. It doesn't re-render components that didn't select that field. There's no need to manually patch or invalidate cache entries. If the action fails, fate rolls back the optimistic update automatically and re-renders all affected components.

All of the above works because fate has a normalized data cache under the hood, with objects stored by their ID and type name ( __typename , e.g. Post or User ), and a tRPC backend conforming to fate 's requirements , exposing byId and list queries for each data type.

You can adopt fate incrementally in an existing tRPC codebase without changing your existing schema by adding these queries alongside your existing procedures.

Clarity

With these three code examples we covered almost the entire client API surface of fate . As a result, the mental model of using fate is dramatically simpler compared to the status quo. fate 's API is a joy to use and requires less code, boilerplate, and manual state management.

It's this clarity together with reducing the API surface that helps humans and AI write better code.

Async

Finally, by using modern Async React, the latest React DevTools features for Suspense and Component Tracks, the React Compiler and even Hot Module Reloading (HMR) for data views work out of the box.

Get Started

Get started with a ready-made template quickly

bash

npx giget@latest gh:nkzw-tech/fate-template

bash

pnpx giget@latest gh:nkzw-tech/fate-template

bash

yarn dlx giget@latest gh:nkzw-tech/fate-template

fate-template comes with a simple tRPC backend and a React frontend using fate . It features modern tools to deliver an incredibly fast development experience. Follow its README.md to get started.

Read about the Core Concepts , or jump right in and learn about Views .

You can also try a runnable demo directly in your browser:

Open in GitHub Codespaces

Future

fate is not complete yet. The library lacks core features such as garbage collection and a compiler to extract view definitions statically ahead of time, and there is still too much backend boilerplate. The current implementation of fate is not tied to tRPC or Prisma; those are just the ones we are starting with. We welcome contributions and ideas to improve fate. Here are some features we'd like to add:

  • Support for Drizzle
  • Support backends other than tRPC
  • Persistent storage for offline support
  • Implement garbage collection for the cache
  • Better code generation and less type repetition
  • Support for live views and real-time updates via useLiveView and SSE

NOTE

80% of fate 's code was written by OpenAI's Codex – four versions per task, carefully curated by a human. The remaining 20% was written by @cnakazawa . You get to decide which parts are the good ones! The docs were 100% written by a human.

If you contribute to fate , we require you to disclose your use of AI tools .

Hope

Please try out fate and share your feedback. I'm excited to hear what you think and how it works for you.

If fate does not live up to its promise, I hope that Relay's and fate 's ideas will impact future data libraries for the better.

Thank you for reading.

UringMachine Benchmarks

Lobsters
github.com
2025-12-09 14:00:00
Comments...

TextKit 2 - the promised land

Lobsters
blog.krzyzanowskim.com
2025-12-09 13:59:41
Comments...
Original Article

The TextKit 2 (NSTextLayoutManager) API was announced publicly during WWDC21, over 4 years ago. Before that, it was in private development for a few years and gained widespread adoption in the macOS and iOS frameworks. It promised an easier, faster, overall better API and text layout engine to replace the aged TextKit 1 (NSLayoutManager) engine.

Over the years, I gained some level of expertise in TextKit 2 and macOS/iOS text processing, which resulted in STTextView - a re-implementation of TextView for macOS (AppKit) and iOS (UIKit) using the TextKit 2 framework as a text layout engine - as well as public speaking praising the new, better engine we had just been given to solve all the problems.

Based on my 4 years of experience working with it, I feel like I fell into a trap. It's not a silver bullet. It is arguably an improvement over TextKit 1. I want to discuss certain issues that make TextKit 2 annoying to use (at best) and not the right tool for the job (at worst).

The architecture & implementation

The TextKit2 architecture is good. The abstraction and the components make a lot of sense and deliver on the premise of progressive complexity. BUT the implementation is not quite on par with the architecture. On one side, NSTextContentManager provides an abstract interface for the layout engine. In practice, using anything other than NSTextContentStorage is impossible. NSTextContentStorage is the one (and only) provided implementation of the storage that works. That itself is backed by NSTextStorage, which is an abstract interface for the content storage itself - meaning all the problems I may have with NSTextStorage apply to TextKit 2 as well. In short, UITextView/NSTextView won't work with anything other than NSTextContentStorage.

The text content manager operates on a series of NSTextElement blocks, but again, the only working implementation must inherit from NSTextParagraph, or you're in trouble (runtime assertions).

The implementation is inconsistent, and it seems intentional. TextKit2 is implemented to be used by UITextView, and that is quickly obvious. What a waste of a great idea that could have been otherwise.

Bugs in software are expected, and TextKit 2 is no exception. I reported many bugs myself. Some issues are fixed, while others remain unresolved; many received no response. Additionally, bugs occur in specific versions, and regressions are common. It is annoying to maintain compatibility, of course. From my perspective, probably the most annoying bugs are around the "extra line fragment" (the rectangle for the extra line fragment at the end of a document) and its broken layout.

Viewport is a struggle

The real struggle, though, is around the newly introduced idea of the viewport and how it works. The viewport is a tool that optimizes the text layout engine's work and minimizes the memory footprint by focusing on the visible area, rather than the entire document, all the time. The viewport is a small portion of the document, covering the visible area, that "moves" as the user interacts with different parts of the document (e.g., scrolling moves the viewport frame).

The viewport promise is that I don't have to ensure the layout of the whole document to get the layout of a random fragment of the document, and only layout lazily fragments that are actually important to display. To make this feature work, it requires various caching, managing intervals, invalidating ranges, and other related tasks; the TextKit 2 framework handles all of that.

Here's the stage: imagine you have a window with a text in it. Text scrolls up and down; as you scroll, the visible area displays the layout text. So, a typical text editor/viewer scenario.

TextEdit with plain text content. One of the first uses of TextKit 2 on macOS.

One of the problems with viewport management is the very same thing that is the feature of the viewport. When ensuring layout only in the viewport (visible area), all other parts of the document are estimated. Specifically, the total height of the document is estimated. The estimation changes frequently as I lay out more/different parts of the document. That happens when I move the viewport while scrolling up/down. TextKit updates the value of NSTextLayoutManager.usageBoundsForTextContainer whenever the estimates change. The recipe to estimate the total height of the document is

  1. ensureLayout(for: documentRange.endLocation) that says, ensure layout of the end of the document, without forcing layout of the whole document. That operation, by definition, results in an estimated size.
  2. Resize the view to match the usageBoundsForTextContainer value. In a scrollview, this results in an update of the scroller to reflect the current document position.

The first problem I notice with this approach is that as I scroll the document and continue to lay out the changing viewport position, the value of usageBoundsForTextContainer is unstable. It frequently changes value significantly. In a scrollview, such frequent and significant changes to the height result in "jiggery" of the scroller position and size.

scrolling down. as document content moves up, viewport moves downward

The jiggery is super annoying and hard to accept. This is also expected, given that the height is estimated. Works as-designed:

This is correct and as-designed – The viewport-based layout in TextKit2 doesn't require that the document is fully laid out; it just needs that the part of text to be displayed on screen is laid out, and that is the way it achieves a better scrolling performance.
https://developer.apple.com/forums/thread/761364?answerId=799739022#799739022

A slightly "better" (as in more stable) value, from my observation, comes from asking for the location of the last layout element, using enumerateTextLayoutFragments and requesting the layout frame of the last, and only the last, fragment.

enumerateTextLayoutFragments(from: documentRange.endLocation, options: [.reverse, .ensuresLayout]) { layoutFragment in
    lastLineMaxY = layoutFragment.layoutFragmentFrame.maxY
    return false
}

That value is also just an estimate, and it is usually significantly higher than the height of the final, fully laid-out document. How do I jump to the end of the document? The answer is:

  • receive an estimated (too big or too small) content height
  • update the view content size with the estimated height
  • enforce layout at the end of the document
  • move (relocate) the viewport to the end of that height (either final or estimated)

And yes, the viewport will display the end of the document, but the total height of the content is still estimated, meaning the scroller is most likely at the wrong position (it is wrong). What's the "fix" for that? The best guess is to artificially and continuously "adjust" the viewport position: the view scrolls to the estimated bottom of the document, and we recognize (from the context) that this should be the end of the document and "fake" the viewport into displaying the end of the document at that position, even if that position is way out of bounds of the document's actual size. That operation (and, more likely, more adjustments like it) is fragile, and frankly, not easy to handle in a way that is not noticeable.

For a long time, I thought that I was "holding it wrong" and that there must be a way (maybe a private API) that addresses these problems; then I realized I'm not wrong. The TextEdit app on macOS suffers from the very same issues I see in my implementations:

TextEdit and TextKit 2 glitches, if you know where to push the button.

So, so

Today, I believe that's not me. The TextKit 2 API and its implementation are lacking and unexpectedly difficult to use correctly. While the design is solid, it proved challenging to apply in real-world applications. I wish I had a better or more optimistic summary of my findings, but it is what it is. I've started to think that TextKit 2 might not be the best tool for text layout, especially when it comes to text editing UI. I remain open to suggestions, and hopefully, I will find a way to use TextKit 2 without compromising user experience.

Rahm Emanuel says U.S. should follow Australia's youth social media ban

Hacker News
www.politico.com
2025-12-09 13:48:26
Comments...

Save Mumia's Eyesight: Supporters March to Prison to Demand Medical Care for Him & Aging Prisoners

Democracy Now!
www.democracynow.org
2025-12-09 13:47:49
Supporters of Mumia Abu-Jamal are on a 103-mile, 12-day march ending Tuesday in Frackville, Pennsylvania, where he is imprisoned at the Mahanoy state prison. The march ends on the same day Abu-Jamal was arrested in 1981 for the murder of Philadelphia police officer Daniel Faulkner, for which he has ...
Original Article

Image Credit: Instagram/@march4mumia (photo left)

Supporters of Mumia Abu-Jamal are on a 103-mile, 12-day march ending Tuesday in Frackville, Pennsylvania, where he is imprisoned at the Mahanoy state prison. The march ends on the same day Abu-Jamal was arrested in 1981 for the murder of Philadelphia police officer Daniel Faulkner, for which he has always maintained his innocence. One of the best-known political prisoners in the world, Abu-Jamal was an award-winning journalist and co-founder of the Philadelphia chapter of the Black Panther Party before his incarceration, and has continued to write and speak from prison. Human rights groups say he was denied a fair trial, with evidence unearthed in 2019 showing judicial bias and police and prosecutorial misconduct. Abu-Jamal is now 71 years old, and advocates say he is being denied proper medical care in prison, permanently risking his eyesight.

“We’re marching today to demand freedom for Mumia and all political prisoners,” says activist Larry Hamm.

“We ration healthcare in this country, and in particular for prisoners,” says Noelle Hanrahan, part of Abu-Jamal’s legal team, who is demanding “that Mumia get specialist care … and that he is given the treatment that he deserves.”



Guests
  • Larry Hamm

    civil rights activist, chair of the People’s Organization For Progress and one of the elders on the March for Mumia.

  • Noelle Hanrahan

    founder and producer of Prison Radio, which has been recording and distributing Mumia Abu-Jamal’s commentaries from prison since 1992. She is also an attorney on Abu-Jamal’s legal team.


Please check back later for full transcript.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.

Oliver Sacks Put Himself into His Case Studies. What Was the Cost?

Hacker News
www.newyorker.com
2025-12-09 13:45:01
Comments...
Original Article

When Oliver Sacks arrived in New York City, in September, 1965, he wore a butter-colored suit that reminded him of the sun. He had just spent a romantic week in Europe travelling with a man named Jenö Vincze, and he found himself walking too fast, fizzing with happiness. “My blood is champagne,” he wrote. He kept a letter Vincze had written him in his pocket all day, feeling as if its pages were glowing. Sacks had moved to New York to work as a fellow in neuropathology at the Albert Einstein College of Medicine, in the Bronx, and a colleague observed that he was “walking on air.” Every morning, he carefully polished his shoes and shaved. He adored his bosses. “I smile like a lighthouse in all directions,” he wrote Vincze.

Sacks was thirty-two, and he told Vincze that this was his first romantic relationship that was both physical and reciprocal. He felt he was part of a “two man universe,” seeing the world for the first time—“seeing it clear, and seeing it whole.” He wandered along the shipping piers on the Hudson River, where gay men cruised, with a notebook that he treated as a diary and as an endless letter to Vincze. “To watch life with the eyes of a homosexual is the greatest thing in the world,” Vincze had once told Sacks.

Sacks’s mother, a surgeon in London, had suspected that her son was gay when he was a teen-ager. She declared that homosexuality was an “abomination,” using the phrase “filth of the bowel” and telling him that she wished he’d never been born. They didn’t speak of the subject again. Sacks had moved to America—first to California and then, after five years, to New York—because, he wrote in his journal, “I wanted a sexual and moral freedom I felt I could never have in England.” That fall, during Yom Kippur, he decided that, rather than going to synagogue to confess “to the total range of human sin,” a ritual he’d grown up with, he’d spend the night at a bar, enjoying a couple of beers. “What I suppose I am saying, Jenö, is that I now feel differently about myself, and therefore about homosexuality as a whole,” he wrote. “I am through with cringing, and apologies, and pious wishes that I might have been ‘normal.’ ” (The Oliver Sacks Foundation shared with me his correspondence and other records, as well as four decades’ worth of journals—many of which had not been read since he wrote them.)

In early October, Sacks sent two letters to Vincze, but a week passed without a reply. Sacks asked his colleagues to search their mailboxes, in case the letter had been put in the wrong slot. Within a few days, however, he had given up on innocent explanations. He began dressing sloppily. He stopped coming to work on time. He had sex with a series of men who disgusted him.

After two weeks, Vincze, who was living in Berlin, sent a letter apologizing for his delayed reply and reiterating his love. He explained that he was so preoccupied by thoughts of Sacks that he felt as if he were living in a “Klaudur,” a German word that Vincze defined as a “spiritual cell.” He seems to have misspelled Klausur , which refers to an enclosed area in a monastery, but Sacks kept using the misspelled word, becoming obsessed with it. “It ramifies in horrible associations,” he wrote Vincze. “The closing of a door. Klaudur, claustrophobia, the sense of being shut in.” Sacks had long felt as if he were living in a cell, incapable of human contact, and this word appeared to be all he needed to confirm that the condition was terminal. The meaning of the word began morphing from “spiritual cell” to “psychotic cage.”


The intimacy Sacks had rejoiced in now seemed phony, a “folie à deux”—a two-person delusion. His doubts intensified for a month, then he cut off the relationship. “I must tear you out of my system, because I dare not be involved ,” he told Vincze, explaining that he barely remembered how he looked, or the sound of his voice. “I hope I will not be taken in like this again, and that—conversely—I will have the strength and clarity of mind to perceive any future such relationships as morbid at their inception, and to abort the folly of their further growth.”

Two months later, Sacks felt himself “slipping down the greased path of withdrawal, discontent, inability to make friends, inability to have sex, etc. etc. towards suicide in a New York apartment at the age of 32.” He took enormous amounts of amphetamines, to the point of hallucinating. A family friend, a psychiatrist who worked with Anna Freud, urged him to find a psychoanalyst. She wrote him that his homosexuality was “a very ‘secondary phenomenon’ ”: he was attracted to men as “a substitute for veering uncertainties of what/whom you could love other than as ‘idealizations’ of yourself.” A few weeks later, he started therapy with Leonard Shengold, a young psychiatrist who was deeply immersed in Manhattan’s psychoanalytic culture. “I think he is very good, and he has at least a very considerable local reputation,” Sacks wrote his parents, who helped to pay for the sessions, three times a week.

Sacks had elevated yet hazy ambitions at the time: he wanted to be a novelist, but he also wanted to become the “Galileo of the inward,” he told a mentor, and to write the neurological equivalent of Sigmund Freud’s “Interpretation of Dreams.” He worked in wards with chronically ill and elderly patients who had been warehoused and neglected, and his prospects within academic medicine looked dim. “Have you published anything lately?” his father wrote him, in 1968. “Or have you found yourself temperamentally incapacitated from doing so?”

When Sacks began therapy, “my initial and ultimate complaint was of fixity—a feeling of not-going,” he wrote in his journal. He regarded Shengold as “a sort of analytic machine.” But gradually Sacks came to feel that “I love him, and need him; that I need him—and love him.” He had planned to stay in New York City only for a few years, but he kept delaying his return to England so that he could reach “a terminable point in my analysis.” Shengold, who would eventually publish ten books about psychoanalysis, wrote that therapy requires a “long period of working through”—a term he defined as the “need to repeat emotional conflicts over and over in life” until the patient has the “freedom to own what is there to be felt.”

Sacks saw Shengold for half a century. In that time, Sacks became one of the world’s most prominent neurologists and a kind of founding father of medical humanities—a discipline that coalesced in the seventies, linking healing with storytelling. But the freedom that Shengold’s analysis promised was elusive. After Vincze, Sacks did not have another relationship for forty-four years. He seemed to be doing the “working through” at a remove—again and again, his psychic conflicts were displaced onto the lives of his patients. He gave them “some of my own powers , and some of my phantasies too,” he wrote in his journal. “I write out symbolic versions of myself.”

During Sacks’s neurology internship, in San Francisco, his childhood friend Eric Korn warned him that the residents at his hospital could sense he was gay. “For God’s sake, exercise what seems to you immoderate caution,” Korn wrote, in 1961. “Compartmentalize your life. Cover your tracks. Don’t bring in the wrong sort of guests to the hospital, or sign your name and address to the wrong sort of register.” He encouraged Sacks to read “Homosexuality: Disease or Way of Life?,” a best-selling book by Edmund Bergler, who argued that homosexuality was an “illness as painful, as unpleasant and as disabling as any other serious affliction,” but one that psychoanalysis could cure. “The book is full of interest,” Korn wrote. “He claims a potential 100% ‘cures’ (a term he chooses to employ because he knows it teases) which is worth investigating perhaps.”

Freud characterized homosexuality as a relatively normal variant of human behavior, but when psychoanalysis came to the United States, in the postwar years, homophobia took on new life. The historian Dagmar Herzog has described how, in the U.S., “reinventing psychoanalysis and reinventing homophobia went hand in hand.” Faced with men who persisted in their love for other men, American analysts commonly proposed celibacy as a stopgap solution. In the historian Martin Duberman’s memoir “Cures,” he writes that his psychoanalyst instructed him to “take the veil”—live celibately—so that he could be cured of his desire for men. Duberman agreed to these terms. The best he could get, he thought, was sublimation: instead of enjoying an “affective life,” he would make “some contribution to the general culture from which I was effectively barred.” Sacks, who was closeted until he was eighty, also followed this course.

Shengold had portraits of Charles Dickens, William Shakespeare, and Sigmund Freud in his office, on the Upper East Side. Like Sacks, he came from a literary Jewish family. He seemed deeply attuned to Sacks’s creative life, which took the form of ecstatic surges of literary inspiration followed by months of sterility and depression. “Do your best to enjoy and to work—it is the power of your mind that is crucial ,” Shengold wrote when Sacks was on a visit with his family in England. Sacks wrote in his journal that he’d dreamed he overheard Shengold telling someone, “Oliver is lacking in proper self-respect; he has never really appreciated himself, or appreciated others’ appreciation of him. And yet, in his way, he is not less gifted than Auden was.” Sacks woke up flushed with embarrassment and pleasure.

Sacks in 1987. He became the modern master of the case study. “I write out symbolic versions of myself,” he wrote. Photograph by Lowell Handler

Unlike many of his contemporaries, Shengold was not a doctrinaire thinker, but he was still susceptible to psychoanalytic fashions. Reflecting on how he might have viewed living openly as a gay man at that time, Shengold’s daughter, Nina, told me, “I don’t know that was a door that Dad necessarily had wide open.” In several books and papers, Shengold, a prolific reader of Western literature, tried to understand the process by which troubled people sublimate their conflicts into art. In his 1988 book, “Halo in the Sky: Observations on Anality and Defense,” Shengold wrote about the importance of transforming “anal-sadistic drives”—he used the anus as a metaphor for primitive, dangerous impulses—into “adaptive and creative ‘making.’ ” When Sacks read the book, he wrote in his journal that it “made me feel I was ‘lost in anality’ (whatever this means).”

Before Vincze, Sacks had been in love with a man named Mel Erpelding, who once told him, Sacks wrote, that he “oozed sexuality, that it poured out through every pore, that I was alive and vibrant with sexuality (a positive-admiring way of putting things), but also that I was reeking and toxic with it.” (Erpelding, who ended up marrying a woman, never allowed his relationship with Sacks to become sexual.) In his early years of therapy, in the late sixties, Sacks resolved that he would give up both drugs and sex. It’s doubtful that Shengold encouraged his celibacy, but he may have accepted that sexual abstinence could be productive, at least for a time. Richard Isay, the first openly gay member of the American Psychoanalytic Association, said that, in the seventies, he’d “rationalized that maturity and mental health demanded the sublimation of sexual excitement in work.” Sacks told a friend, “Shengold is fond of quoting Flaubert’s words ‘the mind has its erections too.’ ”

For Sacks, writing seemed almost physiological, like sweating—an involuntary response to stimuli. He routinely filled a whole journal in two days. “Should I then put down my pen , my interminable Journal (for this is but a fragment of the journal I have kept all my life),” he asked, “and ‘start living’ instead?” The answer was almost always no. Sometimes Sacks, who would eventually publish sixteen books, wrote continuously in his journal for six hours. Even when he was driving his car, he was still writing—he set up a tape recorder so that he could keep developing his thoughts, which were regularly interrupted by traffic or a wrong turn. Driving through Manhattan one day in 1975, he reflected on the fact that his closets, stuffed with pages of writing, resembled a “grave bursting open.”

By the late sixties, Sacks had become, he wrote, “almost a monk in my asceticism and devotion to work.” He estimated that he produced a million and a half words a year. When he woke up in the middle of the night with an erection, he would cool his penis by putting it in orange jello. He told Erpelding, “I partly accept myself as a celibate and a cripple, but partly—and this is . . . the wonder of sublimation—am able to transform my erotic feelings into other sorts of love—love for my patients, my work, art, thought.” He explained, “I keep my distance from people, am always courteous, never close. For me (as perhaps for you) there is almost no room, no moral room.”

“I have some hard ‘confessing’ to do—if not in public, at least to Shengold—and myself,” Sacks wrote in his journal, in 1985. By then, he had published four books—“Migraine,” “Awakenings,” “A Leg to Stand On,” and “The Man Who Mistook His Wife for a Hat”—establishing his reputation as “our modern master of the case study,” as the Times put it. He rejected what he called “pallid, abstract knowing,” and pushed medicine to engage more deeply with patients’ interiority and how it interacted with their diseases. Medical schools began creating programs in medical humanities and “narrative medicine,” and a new belief took hold: that an ill person has lost narrative coherence, and that doctors, if they attend to their patients’ private struggles, could help them reconstruct a new story of their lives. At Harvard Medical School, for a time, students were assigned to write a “book” about a patient. Stories of illness written by physicians (and by patients) began proliferating, to the point that the medical sociologist Arthur Frank noted, “ ‘Oliver Sacks’ now designates not only a specific physician author but also a . . . genre—a distinctively recognizable form of storytelling.”

But, in his journal, Sacks wrote that “a sense of hideous criminality remains (psychologically) attached” to his work: he had given his patients “powers (starting with powers of speech) which they do not have.” Some details, he recognized, were “pure fabrications.” He tried to reassure himself that the exaggerations did not come from a shallow place, such as a desire for fame or attention. “The impulse is both ‘purer’—and deeper,” he wrote. “It is not merely or wholly a projection—nor (as I have sometimes, ingeniously-disingenuously, maintained) a mere ‘sensitization’ of what I know so well in myself. But (if you will) a sort of autobiography.” He called it “symbolic ‘exo-graphy.’ ”

Sacks had “misstepped in this regard, many many times, in ‘Awakenings,’ ” he wrote in another journal entry, describing it as a “source of severe, long-lasting, self-recrimination.” In the book, published in 1973, he startled readers with the depth of his compassion for some eighty patients at Beth Abraham Hospital, in the Bronx, who had survived an epidemic of encephalitis lethargica, a mysterious, often fatal virus that appeared around the time of the First World War. The patients had been institutionalized for decades, in nearly catatonic states. At the time, the book was met with silence or skepticism by other neurologists—Sacks had presented his findings in a form that could not be readily replicated, or extrapolated from—but, to nonspecialists, it was a masterpiece of medical witnessing. The Guardian would name it the twelfth-best nonfiction book of all time.


Sacks spent up to fifteen hours a day with his patients, one of the largest groups of post-encephalitic survivors in the world. They were “mummified,” like “living statues,” he observed. A medicine called L-dopa, which elevates the brain’s dopamine levels, was just starting to be used for Parkinson’s disease, on an experimental basis, and Sacks reasoned that his patients, whose symptoms resembled those of Parkinson’s, could benefit from the drug. In 1969, within days of giving his patients the medication, they suddenly “woke up,” their old personalities intact. Other doctors had dismissed these patients as hopeless, but Sacks had sensed that they still had life in them—a recognition that he understood was possible because he, too, felt as if he were “buried alive.”

In “Awakenings,” Sacks writes about his encounters with a man he calls Leonard L. “What’s it like being the way you are?” Sacks asks him the first time they meet. “Caged,” Leonard replies, by pointing to letters of the alphabet on a board. “Deprived. Like Rilke’s ‘Panther’ ”—a reference to a poem by Rainer Maria Rilke about a panther pacing repetitively in cramped circles “around a center / in which a mighty will stands paralyzed.”

When Sacks was struggling to write his first book, “Migraine,” he told a friend that he felt like “Rilke’s image of the caged panther, stupefied, dying, behind bars.” In a letter to Shengold, he repeated this image. When Sacks met Leonard, he jotted down elegant observations in his chart (“Quick and darting eye movements are at odds with his general petrified immobility”), but there is no mention of Leonard invoking the Rilke poem.

In the preface to “Awakenings,” Sacks acknowledges that he changed circumstantial details to protect his patients’ privacy but preserved “what is important and essential—the real and full presence of the patients themselves.” Sacks characterizes Leonard as a solitary figure even before his illness: he was “continually buried in books, and had few or no friends, and indulged in none of the sexual, social, or other activities common to boys of his age.” But, in an autobiography that Leonard wrote after taking L-dopa, he never mentions reading or writing or being alone in those years. In fact, he notes that he spent all his time with his two best friends—“We were inseparable,” he writes. He also recalls raping several people. “We placed our cousin over a chair, pulled down her pants and inserted our penises into the crack,” he writes on the third page, in the tone of an aging man reminiscing on better days. By page 10, he is describing how, when he babysat two girls, he made one of them strip and then “leaped on her. I tossed her on her belly and pulled out my penis and placed it between her buttocks and started to screw her.”

Leonard Shengold, Sacks’s psychoanalyst. Photograph courtesy Nina Shengold

In “Awakenings,” Sacks has cleansed his patient’s history of sexuality. He depicts him as a man of “most unusual intelligence, cultivation, and sophistication”—the “ ‘ideal’ patient.” L-dopa may have made Leonard remember his childhood in a heightened sexual register—his niece and nephew, who visited him at the hospital until his death, in 1981, told me that the drug had made him very sexual. But they said that he had been a normal child and adolescent, not a recluse who renounced human entanglement for a life of the mind.

Sacks finished writing “Awakenings” rapidly in the weeks after burying his mother, who’d died suddenly, at the age of seventy-seven. He felt “a great open torrent—and release ,” he wrote in his journal. “It seems to be surely significant that ‘Awakenings’ finally came forth from me like a cry after the death of my own mother.” He referred to the writing of the book as his “Great Awakening,” the moment he “came out.” He doesn’t mention another event of significance: his patients had awakened during the summer of the Stonewall riots, the beginning of the gay-rights movement.

Shengold once told Sacks that he had “never met anyone less affected by gay liberation.” (Shengold supported his own son when he came out as gay, in the eighties.) Sacks agreed with the characterization. “I remain resolutely locked in my cell despite the dancing at the prison gates,” he said, in 1984.

In “Awakenings,” his patients are at first overjoyed by their freedom; then their new vitality becomes unbearable. As they continue taking L-dopa, many of them are consumed by insatiable desires. “L-DOPA is wanton, egotistical power,” Leonard says in the book. He injures his penis twice and tries to suffocate himself with a pillow. Another patient is so aroused and euphoric that she tells Sacks, “My blood is champagne”—the phrase Sacks used to describe himself when he was in love with Vincze. Sacks begins tapering his patients’ L-dopa, and taking some of them off of it completely. The book becomes a kind of drama about dosage: an examination of how much aliveness is tolerable, and at what cost. Some side effects of L-dopa, like involuntary movements and overactivity, have been well documented, but it’s hard not to wonder if “Awakenings” exaggerates the psychological fallout—Leonard becomes so unmanageable that the hospital moves him into a “punishment cell”—as if Sacks is reassuring himself that free rein of the libido cannot be sustained without grim consequence.

After “Awakenings,” Sacks intended his next book to be about his work with young people in a psychiatric ward at Bronx State Hospital who had been institutionalized since they were children. The environment reminded Sacks of a boarding school where he had been sent, between the ages of six and nine, during the Second World War. He was one of four hundred thousand children evacuated from London without their parents, and he felt abandoned. He was beaten by the headmaster and bullied by the other boys. The ward at Bronx State “exerted a sort of spell on me,” Sacks wrote in his journal, in 1974. “I lost my footing of proper sympathy and got sucked, so to speak, into an improper ‘perilous condition’ of identification to the patients.”

Shengold wrote several papers and books about a concept he called “soul murder”—a category of childhood trauma that induces “a hypnotic living-deadness, a state of existing ‘as if’ one were there.” Sacks planned to turn his work at Bronx State into a book about “ ‘SOUL MURDER’ and ‘SOUL SURVIVAL,’ ” he wrote. He was especially invested in two young men on the ward whom he thought he was curing. “The miracle-of-recovery started to occur in and through their relation to me (our relation and feelings to each other , of course),” he wrote in his journal. “We had to meet in a passionate subjectivity, a sort of collaboration or communication which transcended the Socratic relation of teacher-and-pupil.”

In a spontaneous creative burst lasting three weeks, Sacks wrote twenty-four essays about his work at Bronx State which he believed had the “beauty, the intensity, of Revelation . . . as if I was coming to know, once again, what I knew as a child, that sense of Dearness and Trust I had lost for so long.” But in the ward he sensed a “dreadful silent tension.” His colleagues didn’t understand the attention he was lavishing on his patients—he got a piano and a Ping-Pong table for them and took one patient to the botanical garden. Their suspicion, he wrote in his journal, “centred on the unbearability of my uncategorizability.” As a middle-aged man living alone—he had a huge beard and dressed eccentrically, sometimes wearing a black leather shirt—Sacks was particularly vulnerable to baseless innuendo. In April, 1974, he was fired. There had been rumors that he was molesting some of the boys.

That night, Sacks tore up his essays and then burned them. “Spite! Hate! Hateful spite!” he wrote in his journal shortly after. “And now I am empty—empty handed, empty hearted, desolate.”

The series of events was so distressing that even writing about it in his journal made Sacks feel that he was about to die. He knew that he should shrug off the false accusations as “vile idle gossip thrown by tiddlers and piddlers,” he wrote. But he couldn’t, because of “the parental accusation which I have borne—a Kafka-esque cross, guilt without crime, since my earliest days.”

The historian of medicine Henri Ellenberger observed that psychiatry owes its development to two intertwined dynamics: the neuroses of its founders—in trying to master their own conflicts, they came to new insights and forms of therapy—and the prolonged, ambiguous relationships they had with their patients. The case studies of these relationships, Ellenberger wrote, tended to have a distinct arc: psychiatrists had to unravel their patients’ “pathogenic secret,” a hidden source of hopelessness, in order to heal them.

Sacks’s early case studies also tended to revolve around secrets, but wonderful ones. Through his care, his patients realized that they had hidden gifts—for music, painting, writing—that could restore to them a sense of wholeness. The critic Anatole Broyard, recounting his cancer treatment in the Times Magazine in 1990, wrote that he longed for a charismatic, passionate physician, skilled in “empathetic witnessing.” In short, he wrote, a doctor who “would resemble Oliver Sacks.” He added, “He would see the genius of my illness.”

It speaks to the power of the fantasy of the magical healer that readers and publishers accepted Sacks’s stories as literal truth. In a letter to one of his three brothers, Marcus, Sacks enclosed a copy of “The Man Who Mistook His Wife for a Hat,” which was published in 1985, calling it a book of “fairy tales.” He explained that “these odd Narratives—half-report, half-imagined, half-science, half-fable, but with a fidelity of their own—are what I do, basically, to keep MY demons of boredom and loneliness and despair away.” He added that Marcus would likely call them “confabulations”—a phenomenon Sacks explores in a chapter about a patient who could retain memories for only a few seconds and must “ make meaning, in a desperate way, continually inventing, throwing bridges of meaning over abysses,” but the “bridges, the patches, for all their brilliance . . . cannot do service for reality.”

Sacks was startled by the success of the book, which he had dedicated to Shengold, “my own mentor and physician.” It became an international best-seller, routinely assigned in medical schools. Sacks wrote in his journal,

Guilt has been much greater since ‘Hat’ because of (among other things)

My lies,

falsification

He pondered the phrase “art is the lie that tells the truth,” often attributed to Picasso, but he seemed unconvinced. “I think I have to thrash this out with Shengold—it is killing me, soul-killing me,” he wrote. “My ‘cast of characters’ (for this is what they become) take on an almost Dickensian quality.”

Sacks once told a reporter that he hoped to be remembered as someone who “bore witness”—a term often used within medicine to describe the act of accompanying patients in their most vulnerable moments, rather than turning away. To bear witness is to recognize and respond to suffering that would otherwise go unseen. But perhaps bearing witness is incompatible with writing a story about it. In his journal, after a session with a patient with Tourette’s syndrome, Sacks describes the miracle of being “enabled to ‘feel’—that is, to imagine, with all the powers of my head and heart—how it felt to be another human being.” Empathy tends to be held up as a moral end point, as if it exists as its own little island of good work. And yet it is part of a longer transaction, and it is, fundamentally, a projection. A writer who imagines what it’s like to exist as another person must then translate that into his own idiom—a process that Sacks makes particularly literal.

“I’ll tell you what you are saying,” Sacks told a woman with an I.Q. of around 60 whose grandmother had just died. “You want to go down below and join your dead grandparents down in the Kingdom of Death.” In the conversation, which Sacks recorded, the patient becomes more expressive under the rare glow of her doctor’s sustained attention, and it’s clear that she is fond of him. But he is so excited about her words (“One feels that she is voicing universal symbols,” he says in a recording, “symbols which are infinite in meaning”) that he usurps her experience.

“I know, in a way, you don’t feel like living,” Sacks tells her, in another recorded session. “Part of one feels dead inside, I know, I know that. . . . One feels that one wants to die, one wants to end it, and what’s the use of going on?”

“I don’t mean it in that way,” she responds.

“I know, but you do, partly,” Sacks tells her. “I know you have been lonely all your life.”


The woman’s story is told, with details altered, in a chapter in “Hat” titled “Rebecca.” In the essay, Rebecca is transformed by grief for her grandmother. She reminds Sacks of Chekhov’s Nina, in “The Seagull,” who longs to be an actress. Though Nina’s life is painful and disappointing, at the end of the play her suffering gives her depth and strength. Rebecca, too, ends the story in full flower. “Rather suddenly, after her grandmother’s death,” Sacks writes, she becomes decisive, joining a theatre group and appearing to him as “a complete person, poised, fluent,” a “natural poet.” The case study is presented as an ode to the power of understanding a patient’s life as a narrative, not as a collection of symptoms. But in the transcripts of their conversations—at least the ones saved from the year that followed, as well as Sacks’s journals from that period—Rebecca never joins a theatre group or emerges from her despair. She complains that it’s “better that I shouldn’t have been born,” that she is “useless,” “good for nothing,” and Sacks vehemently tries to convince her that she’s not. Instead of bearing witness to her reality, he reshapes it so that she, too, awakens.

Some of the most prominent nonfiction writers of Sacks’s era (Joseph Mitchell, A. J. Liebling, Ryszard Kapuściński) also took liberties with the truth, believing that they had a higher purpose: to illuminate the human condition. Sacks was writing in that spirit, too, but in a discipline that depends on reproducible findings. The “most flagrant example” of his distortions, Sacks wrote in his journal, was in one of the last chapters of “Hat,” titled “The Twins,” about twenty-six-year-old twins with autism who had been institutionalized since they were seven. They spend their days reciting numbers, which they “savored, shared” while “closeted in their numerical communion.” Sacks lingers near them, jotting down the numbers, and eventually realizes that they are all prime. As a child, Sacks used to spend hours alone, trying to come up with a formula for prime numbers, but, he wrote, “I never found any Law or Pattern for them—and this gave me an intense feeling of Terror, Pleasure, and—Mystery.” Delighted by the twins’ pastime, Sacks comes to the ward with a book of prime numbers which he’d loved as a child. After offering his own prime number, “they drew apart slightly, making room for me, a new number playmate, a third in their world.” Having apparently uncovered the impossible algorithm that Sacks had once wished for, the twins continue sharing primes until they’re exchanging ones with twenty digits. The scene reads like a kind of dream: he has discovered that human intimacy has a decipherable structure, and identified a hidden pattern that will allow him to finally join in.

Before Sacks met them, the twins had been extensively studied because of their capacity to determine the day of the week on which any date in the calendar fell. In the sixties, two papers in the American Journal of Psychiatry provided detailed accounts of the extent of their abilities. Neither paper mentioned a gift for prime numbers or math. When Sacks wrote Alexander Luria, a Russian neuropsychologist, about his work with the twins, in 1973, he also did not mention any special mathematical skills. In 2007, a psychologist with a background in learning theory published a short article in the Journal of Autism and Developmental Disorders , challenging Sacks’s assertion that these twins could spontaneously generate large prime numbers. Because this is not something that humans can reliably do, Sacks’s finding had been widely cited, and was theoretically “important for not only psychologists but also for all scientists and mathematicians,” the psychologist wrote. (The psychologist had contacted Sacks to ask for the title of his childhood book of prime numbers, because he couldn’t find a book of that description, but Sacks said that it had been lost.) Without pointing to new evidence, another scientist wrote in Sacks’s defense, describing his case study as “the most compelling account of savant numerosity skills” and arguing, “This is an example of science at the frontier, requiring daring to advance new interpretations of partial data.”

After the publication of “Hat,” when Sacks was fifty-two years old, he wrote his friend Robert Rodman, a psychoanalyst, that “Shengold suggested, with some hesitancy, some months ago, that I should consider going deeper with him.” He added, “He also observes that I don’t complain, say, of sexual deprivation—though this is absolute.” At first, Sacks was worried that Shengold was preparing to dismiss him from treatment: “I’ve done all I can for you—now manage on your own!” Then he felt hopeful that he didn’t need to assume that “boredom-depression-loneliness-cutoffness” would define the rest of his life. He was also moved that, after twenty years, Shengold still considered him “worth extra work.”

But Sacks was shaken by the idea that they’d only been skimming the surface. He looked back through his notebooks and noticed “a perceptible decline in concern and passion,” which he felt had also dulled the quality of his thought. “Is the superficiality of my work, then, due to superficiality of relationships—to running away from whatever has deeper feeling and meaning?” he asked Rodman. “Is this perhaps spoken of, in a camouflaged way, when I describe the ‘superficialization’ of various patients?” As an example, he referenced an essay in “Hat” about a woman with a cerebral tumor. She was intelligent and amusing but seemed not to care about anyone. “Was this the ‘cover’ of some unbearable emotion?” he writes in the essay.

Sacks felt that Shengold was the reason he was still alive, and that he should go further with him. “What have I to lose?” he asked Rodman. But, he wrote, “what one has to lose, of course, may be just that quasi-stable if fragile ‘functioning’ . . . so there is reason to hesitate.” Going deeper would also mean more fully submitting to someone else’s interpretation, experiencing what he asked of his own patients; Rodman proposed that Sacks was “afraid of the enclosure of analysis, of being reduced and fixed with a formulated phrase.”

Sacks and his partner, Bill Hayes. Photograph courtesy Oliver Sacks Foundation

In the early eighties, Lawrence Weschler, then a writer for The New Yorker, began working on a biography of Sacks. Weschler came to feel that Sacks’s homosexuality was integral to his work, but Sacks didn’t want his sexuality mentioned at all, and eventually asked him to stop the project. “I have lived a life wrapped in concealment and wracked by inhibition, and I can’t see that changing now,” he told Weschler. In his journal, Sacks jotted down thoughts to share with Weschler on the subject: “My ‘sex life’ (or lack of it) is, in a sense irrelevant to the . . . sweep of my mind.” In another entry, he wrote that the Freudian term “sublimation” diminished the process he’d undergone. When he was still having sex, as a young man in California, he used to sheath his body in leather gear, so he was “totally encased, enclosed,” his real self sealed in a kind of “black box.” He wrote, “I have, in a sense, ‘outgrown’ these extraordinary, almost convulsive compulsions—but this detachment has been made possible by incorporating them into a vast and comprehending view of the world.” (Weschler became close friends with Sacks, and, after Sacks died, published a “biographical memoir” titled “And How Are You, Dr. Sacks?”)

It’s unclear whether Sacks did “go deeper” with Shengold. In the late eighties, Sacks wrote in his journal that he was “scared, horrified (but, in an awful way, accepting or complaisant) about my non-life.” He likened himself to a “pithed and gutted creature.” Rather than living, he was managing a kind of “homeostasis.”

In 1987, Sacks had an intense friendship with a psychiatrist named Jonathan Mueller, with whom he briefly fell in love. Mueller, who was married to a woman, told me that he did not realize Sacks had romantic feelings for him. Sacks eventually moved on. But he felt that the experience had altered him. “I can read ‘love stories’ with empathy and understanding—I can ‘enter into them’ in a way which was impossible before,” he wrote in his journal. He perceived, in a new light, what it meant for his patients in “Awakenings” to glimpse the possibility of “liberation”: like him, he wrote, they were seeking “not merely a cure but an indemnification for the loss of their lives.”

By the nineties, Sacks seemed to ask less of himself, emotionally, in relation to his patients. He had started working with Kate Edgar, who’d begun as his assistant but eventually edited his writing, organized his daily life, and became a close friend. (Shengold had encouraged Sacks to find someone to assist with his work. “The secretary is certainly an important ‘ego-auxiliary,’ ” he wrote him in a letter.) Edgar was wary about the way Sacks quoted his patients—they were suspiciously literary, she thought—and she checked to make sure he wasn’t getting carried away. She spent hours with some of his patients, and, she told me, “I never caught him in anything like that, which actually surprises me.”

Weschler told me that Sacks used to express anxiety about whether he’d distorted the truth. Weschler would assure him that good writing is not a strict account of reality; there has to be space for the writer’s imagination. He said he told Sacks, “Come on, you’re extravagantly romanticizing how bad you are—just as much as you were extravagantly romanticizing what the patient said. Your mother’s accusing voice has taken over.” Weschler had gone to Beth Abraham Hospital to meet some of the patients from “Awakenings” and had been shaken by their condition. “There’s a lot of people shitting in their pants, drooling—the sedimentation of thirty years living in a warehouse,” he said. “His genius was to see past that, to the dignity of the person. He would talk to them for an hour, and maybe their eyes would brighten only once—the rest of the time their eyes were cloudy—but he would glom onto that and keep talking.”

After “Hat,” Sacks’s relationship with his subjects became more mediated. Most of them were not his patients; many wrote to him after reading his work, recognizing themselves in his books. There was a different power dynamic, because these people already believed that they had stories to tell. Perhaps the guilt over liberties he had taken in “Hat” caused him to curb the impulse to exaggerate. His expressions of remorse over “making up, ‘enhancing,’ etc,” which had appeared in his journals throughout the seventies and eighties, stopped. In his case studies, he used fewer and shorter quotes. His patients were far more likely to say ordinary, banal things, and they rarely quoted literature. They still had secret gifts, but they weren’t redeemed by them; they were just trying to cope.

In “An Anthropologist on Mars,” from 1992, a book of case studies about people compensating for, and adapting to, neurological conditions, some of the richest passages are the ones in which Sacks allows his incomprehension to become part of the portrait. In a chapter called “Prodigies,” he wants badly to connect with a thirteen-year-old boy named Stephen, who is autistic and has an extraordinary ability to draw, but Stephen resists Sacks’s attempts at intimacy. He will not allow himself to be romanticized, a refusal that Sacks ultimately accepts: “Is Stephen, or his autism, changed by his art? Here, I think, the answer is no.” In this new mode, Sacks is less inclined to replace Stephen’s unknowable experience with his own fantasy of it. He is open about the discomfort, and even embarrassment, of his multiple failures to reach him: “I had hoped, perhaps sentimentally, for some depth of feeling from him; my heart had leapt at the first ‘Hullo, Oliver!’ but there had been no follow-up.”

Mort Doran, a surgeon with Tourette’s syndrome whom Sacks profiled in “Anthropologist,” told me that he was happy with the way Sacks had rendered his life. He said that only one detail was inaccurate—Sacks had written that the brick wall of Doran’s kitchen was marked from Doran hitting it during Tourette’s episodes. “I thought, Why would he embellish that? And then I thought, Maybe that’s just what writers do.” Doran never mentioned the error to Sacks. He was grateful that Sacks “had the gravitas to put it out there to the rest of the world and say, ‘These people aren’t all nuts or deluded. They’re real people.’ ”

The wife in the title story of “Hat” had privately disagreed with Sacks about the portrayal of her husband, but for the most part Sacks appeared to have had remarkable relationships with his patients, corresponding with them for years. A patient called Ray, the subject of a 1981 piece about Tourette’s syndrome, told me that Sacks came to his son’s wedding years after his formal treatment had ended. Recalling Sacks’s death, he found himself suddenly crying. “Part of me left,” he said. “Part of my self was gone.”

A year after “Awakenings” was published, Sacks broke his leg in Norway, and Leonard L. and his mother wrote him a get-well letter. Thirty-two patients added their names, their signatures wavering. “Everybody had been counting the days for your return, so you can imagine the turmoil when they heard the news,” Leonard’s mother wrote. She explained that “most of the patients are not doing so well without your help and interest.” She added that Leonard “isn’t doing too well either.” When Leonard learned that Sacks wouldn’t be back, she said, “he shed enough tears to fill a bucket.”

Sacks spoke of “animating” his patients, as if lending them some of his narrative energy. After living in the forgotten wards of hospitals, in a kind of narrative void, perhaps his patients felt that some inaccuracies were part of the exchange. Or maybe they thought, That’s just what writers do. Sacks established empathy as a quality every good doctor should possess, enshrining the ideal through his stories. But his case studies, and the genre they helped inspire, were never clear about what they exposed: the ease with which empathy can slide into something too creative, or invasive, or possessive. Therapists—and writers—inevitably see their subjects through the lens of their own lives, in ways that can be both generative and misleading.

In his journal, reflecting on his work with Tourette’s patients, Sacks described his desire to help their illness “reach fruition,” so that they would become floridly symptomatic. “With my help and almost my collusion, they can extract the maximum possible from their sickness—maximum of knowledge, insight, courage,” he wrote. “Thus I will FIRST help them to get ill, to experience their illness with maximum intensity; and then, only then, will I help them get well!” On the next line, he wrote, “IS THIS MONSTROUS?” The practice came from a sense of awe, not opportunism, but he recognized that it made him complicit, as if their illness had become a collaboration. “An impulse both neurotic and intellectual (artistic) makes me get the most out of suffering,” he wrote. His approach set the template for a branch of writing and thinking that made it seem as if the natural arc of illness involved insight and revelation, and even some poetry, too.

In his journals, Sacks repeatedly complained that his life story was over. He had the “feeling that I have stopped doing, that doing has stopped, that life itself has stopped, that it is petering out in a sort of twilight of half-being,” he wrote, in 1987. His journals convey a sense of tangible boredom. He transcribed long passages from philosophers and theologists (Simone Weil, Søren Kierkegaard, Gottfried Wilhelm Leibniz, Dietrich Bonhoeffer) and embarked on disquisitions on the best definition of reality, the “metabolism of grace,” the “deep mystery of incubation.” His thoughts cast outward in many directions—notes for a thousand lectures—then tunnelled inward to the point of non-meaning. “Where Life is Free, Immaterial, full of Art,” he wrote, “the laws of life, of Grace, are those of Fitness .”

Sacks proposed various theories for why he had undergone what he called “psychic death.” He wondered if he had become too popular, merely a fuzzy symbol of compassionate care. “Good old Sacks—the House Humanist,” he wrote, mocking himself. He also considered the idea that his four decades of analysis were to blame. Was it possible, he wrote, that a “vivisection of inner life, however conceived, however subtle and delicate, may in fact destroy the very thing it examines?” His treatment with Shengold seemed to align with a life of “homeostasis”—intimacy managed through more and more language, in a contained, sterile setting, on Monday and Wednesday mornings, from 6:00 to 6:45 A.M. They still referred to each other as “Dr. Sacks” and “Dr. Shengold.” Once, they ran into each other at a chamber concert. They were a few rows apart, but they didn’t interact. Occasionally, Shengold told his children that he “heard from the couch” about a good movie or play, but he never shared what happened in his sessions. They inferred that Sacks was their father’s patient after reading the dedication to him in “Hat.”

As Sacks aged, he felt as if he were gazing at people from the outside. But he also noticed a new kind of affection for humans—“homo sap.” “They’re quite complex (little) creatures (I say to myself),” he wrote in his journal. “They suffer, authentically, a good deal. Gifted, too. Brave, resourceful, challenging.”

Perhaps because love no longer appeared to be a realistic risk—he had now entered a “geriatric situation”—Sacks could finally confess that he craved it. “I keep being stabbed by love,” he wrote in his journal. “A look. A glance. An expression. A posture.” He guessed that he had at least five, possibly ten, more years to live. “I want to, I want to ••• I dare not say. At least not in writing.”

In 2008, Sacks had lunch with Bill Hayes, a forty-seven-year-old writer from San Francisco who was visiting New York. Hayes had never considered Sacks’s sexuality, but, as soon as they began talking, he thought, “Oh, my God, he’s gay,” he told me. They lingered at the table for much of the afternoon, connecting over their insomnia, among other subjects. After the meal, Sacks wrote Hayes a letter (which he never sent) explaining that relationships had been “a ‘forbidden’ area for me—although I am entirely sympathetic to (indeed wistful and perhaps envious about) other people’s relationships.”

A year later, Hayes, whose partner of seventeen years had died of a heart attack, moved to New York. He and Sacks began spending time together. At Sacks’s recommendation, Hayes started keeping a journal, too. He often wrote down his exchanges with Sacks, some of which he later published in a memoir, “Insomniac City.”

“It’s really a question of mutuality, isn’t it?” Sacks asked him, two weeks after they had declared their feelings for each other.

“Love?” Hayes responded. “Are you talking about love?”

“Yes,” Sacks replied.

Sacks began taking Hayes to dinner parties, although he introduced him as “my friend Billy.” He did not allow physical affection in public. “Sometimes this issue of not being out became very difficult,” Hayes told me. “We’d have arguments, and I’d say things like ‘Do you and Shengold ever talk about why you can’t come out? Or is all you ever talk about your dreams?’ ” Sacks wrote down stray phrases from his dreams on a whiteboard in his kitchen so that he could report on them at his sessions, but he didn’t share what happened in therapy.

Kate Edgar, who worked for Sacks for three decades, had two brothers who were gay, and for years she had advocated for gay civil rights, organizing Pride marches for her son’s school. She intentionally found an office for Sacks in the West Village so that he would be surrounded by gay men living openly and could see how normal it had become. She tended to hire gay assistants for him, for the same reason. “So I was sort of plotting on that level for some years,” she told me.

In 2013, after being in a relationship with Hayes for four years—they lived in separate apartments in the same building—Sacks began writing a memoir, “On the Move,” in which he divulged his sexuality for the first time. He recounts his mother’s curses upon learning that he was gay, and his decades of celibacy—a fact he mentions casually, without explanation. Edgar wondered why, after so many years of analysis, coming out took him so long, but, she said, “Oliver did not regard his relationship with Shengold as a failure of therapy.” She said that she’d guessed Shengold had thought, “This is something Oliver has to do in his own way, on his own time.” Shengold’s daughter, Nina, said that, “for my dad to have a patient he loved and respected finally find comfort in identifying who he’d been all his life—that’s growth for both of them.”

A few weeks after finishing the manuscript, Sacks, who’d had melanoma of the eye in 2005, learned that the cancer had come back, spreading to his liver, and that he had only months to live. He had tended toward hypochondria all his life, and Edgar thought that the diagnosis might induce a state of chronic panic. Since he was a child, Sacks had had a horror of losing things, even irrelevant objects. He would be overcome by the “feeling that there was a hole in the world ,” he wrote in his journal, and the fear that “I might somehow fall through that hole-in-the-world, and be absolutely, inconceivably lost.” Edgar had dealt for decades with his distress over lost objects, but she noticed that now, when he misplaced things, he didn’t get upset. He had an uncharacteristic ease of being.

In the summer of 2015, before Shengold went on his annual summer break, Sacks said to Edgar, “If I’m alive in September when Shengold returns, I’m not sure I need to go back to my sessions.” They had been seeing each other for forty-nine years. Sacks was eighty-two; Shengold was eighty-nine.

When Sacks was struggling with his third book, “A Leg to Stand On,” which was about breaking his leg and his frustration that his doctors wouldn’t listen to him, he wrote in his journal that Shengold had suggested (while apologizing for the corniness of the phrase) that the book should be “a message of love”—a form of protest against the indifference that so many patients find in their doctors. Shengold may have been giving Sacks permission to see their own relationship—the one place in which Sacks felt an enduring sense of recognition and care—as a hidden subject of the book. Extending Shengold’s idea, Sacks wrote, of his book, “The ‘moral’ center has to do with . . . the irreducible ultimate in doctor-patient relations.”

In August, two weeks before Sacks died, he and Shengold spoke on the phone. Shengold was with his family at a cottage in the Finger Lakes region of central New York, where he spent every summer. Nina told me, “We all gathered in the living room of that little cottage and put my father on speakerphone. Oliver Sacks was clearly on his deathbed—he was not able to articulate very well. Sometimes his diction was just gone. Dad kept shaking his head. He said, ‘I can’t understand you. I’m so sorry, I can’t understand you.’ ” At the end of the call, Shengold told Sacks, “It’s been the honor of my life to work with you,” and said, “Goodbye, Oliver.” Sacks responded, “Goodbye, Leonard.” It was the first time they had ever used each other’s first names. When they hung up, Shengold was crying.

After Sacks died, Shengold started closing down his practice. “It was the beginning of the end for him,” his son David told me. “He had lost most of his colleagues. He was really the last of his generation.” Nina said, “I do think part of why my father lived so long and was able to work so long was because of that relationship. That feeling of affection and kindred spirit was lifesaving.”

In “Awakenings,” when describing how Leonard L.—his “ ‘ideal’ patient”—initially responded to L-dopa, Sacks characterizes him as “a man released from entombment” whose “predominant feelings at this time were feelings of freedom, openness, and exchange with the world.” He quotes Leonard saying, “I have been hungry and yearning all my life . . . and now I am full.” He also says, “I feel saved. . . . I feel like a man in love. I have broken through the barriers which cut me off from love.’ ”

For years, Sacks had tested the possibility of awakenings in others, as if rehearsing, or outsourcing, the cure he had longed to achieve with Shengold. But at the end of his life, like an inside-out case study, he inhabited the story he’d imagined for his patients. “All of us entertain the idea of another sort of medicine . . . which will restore us to our lost health and wholeness,” he wrote, in “Awakenings.” “We spend our lives searching for what we have lost; and one day, perhaps, we will suddenly find it.” ♦

"Honor Our History": Trump Slammed for Ending Free National Park Entry on Juneteenth & MLK Day

Democracy Now!
www.democracynow.org
2025-12-09 13:30:31
The Trump administration is facing backlash after ending free admission at national parks on the only two federal holidays honoring Black history — Juneteenth and Martin Luther King Jr. Day — while adding free entry on President Trump’s birthday, June 14. The Interior Department also announced...
Original Article

The Trump administration is facing backlash after ending free admission at national parks on the only two federal holidays honoring Black history — Juneteenth and Martin Luther King Jr. Day — while adding free entry on President Trump’s birthday, June 14. The Interior Department also announced higher entry fees for non-U.S. residents under what it calls “America-first entry fee policies.”

Denigrating Black history “can’t erase the truth,” says Carolyn Finney, who served on the National Parks Advisory Board during the Obama administration. “It’s not going to change how we feel, not just as Black Americans, but Americans in general, about honoring our history.”

We also speak with Audrey Peterman, author of Our True Nature: Finding a Zest for Life in the National Park System , who says “the entire history of America, the entire history of every racial and ethnic group in America, is in the national park system.”


Please check back later for full transcript.

The original content of this program is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License . Please attribute legal copies of this work to democracynow.org. Some of the work(s) that this program incorporates, however, may be separately licensed. For further information or additional permissions, contact us.

"Merger Madness": Trump at Center of Rival Netflix-Paramount Bids for Warner Bros.

Democracy Now!
www.democracynow.org
2025-12-09 13:16:10
President Donald Trump says he will be personally involved in the potential sale of Warner Bros. Discovery, with two enormous buyout offers on the table that risk further exacerbating U.S. media concentration. Netflix announced an $83 billion deal last week to buy Warner Bros. Discovery, which would...
Original Article

This is a rush transcript. Copy may not be in its final form.

AMY GOODMAN : Paramount has launched a hostile bid to take over Warner Bros. Discovery, just days after Netflix announced an $83 billion deal to acquire Warner’s studio and streaming assets, including HBO Max. Paramount’s largest shareholder is Larry Ellison, one of the world’s richest men and a close ally of President Trump. Paramount is attempting to fund the takeover of the Warner Bros. Discovery company with funding from sovereign wealth funds from Saudi Arabia, Abu Dhabi and Qatar, as well as Affinity Partners, the private equity fund led by Jared Kushner, Trump’s son-in-law.

Critics of media consolidation have warned against both Netflix’s offer and Paramount’s hostile bid. Actor and activist Jane Fonda, who recently relaunched her father’s free speech organization from the 1940s called the Committee for the First Amendment, published an op-ed in The Ankler last week headlined “The WBD Deal Puts Hollywood, and Democracy, at Risk.” She writes further consolidation will mean, quote, “fewer jobs, fewer opportunities to sell work, fewer creative risks, fewer news sources and far less diversity in the stories Americans get to hear. … And when only a handful of mega-companies control the entire pipeline, they gain the power to steamroll every guild — SAG-AFTRA, the WGA, the PGA, the DGA, IATSE, everyone — making it harder for workers to bargain, harder to stand up for themselves and harder to make a living at all.” The words of Jane Fonda.

The future of Warner Bros. may rest in part in the hands of federal regulators who must approve any merger. On Sunday night, prior to Paramount’s hostile bid, President Trump said a Netflix-Warner merger, quote, “could be a problem.”

PRESIDENT DONALD TRUMP : They have a very big market share. And when they have Warner Bros., you know, that share goes up a lot. So, I don’t know. That’s going to be for some economists to tell, and also — and I’ll be involved in that decision, too. … But it is a big market share. There’s no question about that. It could be a problem.

AMY GOODMAN : On Monday, after Paramount announced its hostile bid, President Trump was asked about the competing bids for Warner Bros., including the involvement of his son-in-law, Jared Kushner.

PRESIDENT DONALD TRUMP : I know — I know the companies very well. I know what they’re doing. But I have to see. I have to see what percentage of market they have. We have to see the Netflix percentage of market, Paramount, the percentage of market. I mean, none of them are particularly great friends of mine. You know, I just — I want to — I want to do what’s right. It’s so — it’s so very important to do what’s right.

REPORTER : The Paramount deal is supported by Jared Kushner, Mr. President. Would that impact your decision?

PRESIDENT DONALD TRUMP : If Paramount is? No, I don’t know. I haven’t — I’ve never spoken with him.

AMY GOODMAN : We’re joined now by Craig Aaron, the co-CEO of Free Press and Free Press Action, two media reform organizations — not to be confused with Bari Weiss’s The Free Press, which is now owned by Paramount. Craig’s most recent article is headlined “Stop the Merger Madness.” Free Press has also just published a new report headlined “Chokehold: Donald Trump’s War on Free Speech & the Need for Systemic Resistance.”

Craig Aaron, welcome back to Democracy Now! If you can respond to all that has happened in just a matter of days? There was enormous criticism of Netflix’s bid, and now, of course, you have this hostile bid by none other than President Trump’s son-in-law. Can you talk about all of this?

CRAIG AARON : Absolutely. Good to be with you, Amy and Juan.

This is really a damned-if-you-do, damned-if-you-don’t situation, where we have these giant companies taking — trying to take control of even more of what we watch, see, hear and read every day. So, Netflix, of course, if they could get this deal done, would dominate online streaming. Paramount itself has a huge streaming business and, of course, is a major movie studio. So, this is another situation where we’re talking about giant companies merging, spending billions and billions of dollars, all the lawyers and bankers getting rich.

But you don’t have to look very far in the past to understand that media consolidation after media consolidation deal are disastrous for the workers in these industries. They’re disastrous for the audience, who see prices go up and choices go down.

And pretty much every time, they’re disastrous for the businesses, as well. This is the third giant merger in recent time just to involve Warner Bros. You’ll go back to AOL Time Warner. We’ve had AT&T and Time Warner — all of these deals collapsing, falling apart, costing thousands and thousands of jobs, Warner Bros. Discovery itself the product of failed mergers.

And now we’re being told we need more concentration, more consolidation. And we have to be asking ourselves: Who does this serve? It seems to serve these executives. Maybe it serves Donald Trump. He seems very interested in competition for his favor, not very interested in actual competition when it comes to media, when it comes to entertainment.

JUAN GONZÁLEZ: And, Craig, you mentioned Trump. This spectacle of the president saying he will be involved in the decision —

CRAIG AARON : Unbelievable.

JUAN GONZÁLEZ: — on this merger one way or the other. Forget about the fact that his son-in-law is a participant, obviously, in one of the bids. How frequently have presidents in the past directly involved themselves in these kinds of merger decisions where the FCC is involved?

CRAIG AARON : Well, you know, certainly these are political fights, and so the White House might have a stake. They might have an interest in the outcome. But the idea that the president would be announcing that he’s personally going to be involved, the idea that someone as close to the president as a member of his own family could be poised to benefit from a decision of the administration, this would be completely unthinkable. And even the way this whole decision is being made really does look like, you know, a Mafia-type situation, not anything that is recognizable in terms of policymaking.

Paramount and the Ellisons’ entire argument for why they should be the company that wins this bidding war essentially boils down to “Donald Trump likes us better.” And they’ve been very clear in trying to win over Trump. That’s why Jared Kushner is involved. They’re trying to win over Trump. They’re saying, “We want to control CNN. We’re going to make CNN better for you. Look what we’ve done at CBS, where we’ve muted 60 Minutes, where we’ve put Bari Weiss in charge of the news operation.” This is their entire package and selling point to the administration, is that if they go with Paramount Skydance, then that’s going to be good for Trump, because these media executives understand, unfortunately, that’s how you get things done in the Trump era, is you appeal to the ego of Trump, you flatter Trump, and you try to line Trump’s pockets or the pockets of those closest to him. That’s how business gets done in the Trump era.

JUAN GONZÁLEZ: And what are the next steps here? What agencies do have to have oversight? And you mentioned Paramount, but isn’t the head — the chief legal officer of Paramount, wasn’t he the head of the Antitrust Division of the Justice Department during Trump’s first term?

CRAIG AARON : That’s right, Juan. The revolving door is spinning. So, Makan Delrahim, who was the top antitrust enforcer under the first Trump administration, now, of course, he’s Paramount’s top lawyer trying to work the system to get approval for their hostile takeover, if it’s announced.

So, there are a lot of things that are going to happen in the weeks ahead. I would remind folks that every time a merger is announced, all the companies involved want to treat it like it’s a done deal, like it’s about to be finished. That is not the case. That’s just PR and spin. We’re looking at at least a year of evaluating this deal, and that’s not even including the fact that there are multiple suitors here trying to appeal to the Warner Bros. board, and now directly to their shareholders, in a hostile takeover.

But whatever deal Warner Bros. pursues, probably by the end of this month, by December 22nd, that would have to go before the Justice Department. That is going to be, I believe, who will review this deal at the federal level. Of course, the Justice Department is also not what it used to be even when Delrahim was there. You know, it has become an arm of the — a direct arm of the Trump administration, pursuing the Trump administration’s political goals. But if they actually follow the law, of course, they would have to scrutinize this deal. And really, looking at the basics there, there’s no way a deal like this should even be considered.

Now, it’s not just the Justice Department that will look here. State attorneys general could have a role that’s very important. If you’re the attorney general of California or New York, you should have a lot of interest in what is going to happen to this huge industry that is such a big part of your state. And this is a big enough deal that European regulators and others around the world are also going to be scrutinizing it, because it is such a ginormous, multibillion-dollar deal between these huge companies, that, you know, really will reshape the entertainment industry as we know it.

JUAN GONZÁLEZ: And I wanted to ask you about another merger under consideration, this one by the FCC. That’s of Nexstar, the country’s largest owner of TV stations, with a competitor, TEGNA. Tell us — Nexstar is pretty well known as a very conservative company, isn’t it?

CRAIG AARON : That’s right, and especially during the Trump administration. Nexstar has been collecting hundreds of television stations across the country in this wave of media consolidation. They are now going to the Federal Communications Commission and asking them to waive laws, to actually overturn explicit instructions from Congress that limit the amount of television stations — the size of the audience one television chain can reach. They’re trying to get that thrown out so they can take over TEGNA.

And they’ve done that, again, by appealing to the Trump administration, by promising to crack down on critical journalism and by doing things like taking Jimmy Kimmel off the air. This was one of the companies, along with Sinclair Broadcasting, another very partisan broadcaster, that when the FCC chairman went on a podcast and started complaining about jokes Jimmy Kimmel was making about Charlie Kirk, it was Nexstar rushing to immediately yank him, pull him off the air, while having this multibillion-dollar deal before the FCC. So, again, we have the media executives seeing that the way to get ahead in the Trump administration is appeal to Trump.

We have the Trump administration abusing its power, really shaking down these companies, demanding loyalty, demanding they erase and eliminate their diversity and equity programs, and demanding that, in some cases, they pay off the administration through specious lawsuits, or maybe it’s they offer big movie contracts to the president’s wife or do favors for his son-in-law’s equity firm. This is the way the Trump administration has been working. This is what they’re pursuing to try to do a takeover of the media and make sure the mainstream, dominant media is really only there to serve Trump.

And you can see that in all of these deals. You know, over the weekend, Trump expressed skepticism of Netflix, but then, when 60 Minutes interviewed Marjorie Taylor Greene, well, maybe he doesn’t like CBS and Paramount so much anymore. This is the game they’re playing. It’s all about control and dominance and one narrative. And unfortunately, media executives in broadcast, in cable, in Hollywood, instead of fighting back against this infringement on free speech and freedom, have simply capitulated, hoping it will get their deals done.

AMY GOODMAN : Craig Aaron, I want to thank you for being with us, co-CEO of Free Press and Free Press Action. We’ll link to your new report titled “Chokehold: Donald Trump’s War on Free Speech & the Need for Systemic Resistance.” His new article is headlined “Stop the Merger Madness.” We’ll link to it at democracynow.org.

Next up, outcry is growing after the Trump administration drops free admission to national parks on the only two federal holidays honoring Black history: Juneteenth and Martin Luther King Day. Instead, the parks will be free on Donald Trump’s birthday. Stay with us.


The world needs social sovereignty

Lobsters
blog.joinmastodon.org
2025-12-09 13:14:50
Comments...
Original Article

Elon Musk’s X platform has blocked the European Commission from making advertisements, presumably in response to the €120 million fine for its misleading verification system and overall lack of transparency. We’re grateful to Elon Musk for proving once again why the world needs to log off corporate-owned, centrally-controlled social media platforms and log on to a better way of being online. The world needs an open social web through the fediverse and Mastodon.

Calls for public institutions to invest in digital sovereignty are increasing across civil society. The term digital sovereignty means that an institution has autonomy and control over the critical digital infrastructure, data, and services that make up their online presence. Up until this point, social media has not been a part of this conversation. We think it is time to change that.

In any free society, it is the right of every citizen to access and comment on the news, decisions, and reasonings of their government. We believe it is a government’s responsibility to ensure this right for its constituents. Public institutions should communicate with their citizens on open platforms, not ones that require creating an account and sending personal data to a self-serving tech company. Today, institutions often communicate through the censorious filter of corporations that do not have the best interests of people or society at heart. They let their message be governed by the whims of out-of-touch and overpaid people who believe they should have unchecked power. We cannot let this stand. Mastodon offers a path forward for any institution that wants to take control of their communications, and we can help you get started today.

One of the tools these corporate social media platforms use to control an institution’s communications is the algorithm. Platforms strategically tune their algorithms to make it difficult, if not impossible, for institutions to reach their people without paying the platform ad money. Musk’s move to turn off the European Commission’s advertising capabilities feels like a perverse power play over a legitimate fine, one that effectively silences a crucial avenue for public discourse. We should be horrified that any single individual can wield such influence over the relationship between governments and the people they represent. We should be especially concerned when that individual doesn’t think our governments should exist in the first place.


Mastodon’s chronological timeline means that no institution needs to game an algorithm to keep their people informed. By using hashtags, it’s easy for people who care about the topics you discuss to find you. What’s more, your constituents don’t need to be on Mastodon to follow your posts. They can subscribe via open protocols like RSS and soon via email. When it comes to the source of the fine in the first place—X’s infamous blue checks, a.k.a. verification—Mastodon also offers a better way. We empower people to verify themselves by linking their social profile to their official (or personal) website. This allows for greater transparency and trust than relying on the often less-than-reputable verification practices of a single corporate entity, especially one that is willing to sell reputation for a low monthly fee. (Meanwhile, another corporate social media platform made $16 billion, 10% of their 2024 revenue, from advertisements for scams and banned goods.)

In an era where information is power, it’s disheartening to see our institutions yield so much to the whims of industry and individuals. In contrast, the European Commission is leading the way in taking ownership of social sovereignty on behalf of their people. They own a Mastodon instance, ec.social-network.europa.eu, to reach Europeans directly and keep them well informed. Mastodon is proud to help them manage the technical side of things. If you are someone on the fediverse who would like to see their government own their social sovereignty, we encourage you to get in touch with your local representative and tell them why you think they should start using open social media networks like the fediverse. We’re starting a thread on Mastodon of resources to help you get in touch with your local representative here.

By making the news and truth contingent on advertising budgets, we’ve created an environment where any narrative can win, as long as the storyteller is willing to pay. If we allow these conditions to continue, we will leave behind the voices that truly matter: the people and their public institutions. It is critical that those voices not be silenced forever. The promise of the fediverse is the promise of a better way forward: free from ads and manipulative algorithms, a place built by and for people like you, where our sovereignty is a right and not a privilege.

It will take all of us working together to build a better way of being online. If you want to start an instance or have ideas about how we can encourage more institutions to take control of their social sovereignty, get in touch with us at hello@joinmastodon.org.

Designing Rust FDB Workloads That Actually Find Bugs

Lobsters
pierrezemb.fr
2025-12-09 13:14:22
Comments...
Original Article

After one trillion CPU-hours of simulation testing, FoundationDB has been stress-tested under conditions far worse than any production environment. Network partitions, disk failures, Byzantine faults. FDB handles them all. But what about your code? Your layer sits on top of FDB. Your indexes, your transaction logic, your retry handling. How do you know it survives chaos?

At Clever Cloud, we are building Materia, our serverless database product. The question haunted us: how do you ship layer code with the same confidence FDB has in its own? Our answer was to hack our way into FDB's simulator using foundationdb-simulation, a crate that compiles Rust to run inside FDB's deterministic simulator. Rust is the only language besides Flow that can pull this off.

The first seed triggered commit_unknown_result, one of the most feared edge cases for FDB layer developers. When a connection drops, the client can't know if the transaction committed. Our atomic counters were incrementing twice. In production, this surfaces once every few months under heavy load and during failures. In simulation? Almost immediately.

This post won't walk you through the code mechanics. The foundationdb-simulation crate and its README cover that. Instead, this teaches you how to design workloads that catch real bugs. Whether you're a junior engineer or an LLM helping write tests, these principles will guide you.

🔗 Why Autonomous Testing Works

Traditional testing has you write specific tests for scenarios you imagined. But as Will Wilson put it at Bug Bash 2025: "The most dangerous bugs occur in states you never imagined possible." The key insight of autonomous testing (what FDB's simulation embodies) is that instead of writing tests, you write a test generator. If you ran it for infinite time, it would eventually produce all possible tests you could have written. You don't have infinite time, so instead you get a probability distribution over all possible tests. And probability distributions are leaky: they cover cases you never would have thought to test.

This is why simulation finds bugs so fast. You're not testing what you thought to test. You're testing what the probability distribution happens to generate, which includes edge cases you'd never have written explicitly. Add fault injection (a probability distribution over all possible ways the world can conspire to screw you) and now you're finding bugs that would take months or years to surface in production.

This is what got me interested in simulation in the first place: how do you test the things you see during on-call shifts? Those weird transient bugs at 3 AM, the race conditions that happen once a month, the edge cases you only discover when production is on fire. Simulation shifts that complexity from SRE time to SWE time. What was a 3 AM page becomes a daytime debugging session. What was a high-pressure incident becomes a reproducible test case you can bisect, rewind, and experiment with freely.

🔗 The Sequential Luck Problem

Here's why rare bugs are so hard to find: imagine a bug that requires three unlikely events in sequence. Each event has a 1/1,000 probability, so hitting all three together has a probability of 1/1,000,000,000: roughly a billion tries with random testing. Research confirms this: a study of network partition failures found that 83% require 3+ events to manifest, 80% have catastrophic impact, and 21% cause permanent damage that persists after the partition heals. But here's the good news for Rust workloads: you don't solve this problem yourself. FDB's simulation handles fault injection. BUGGIFY injects failures at arbitrary code points. Network partitions appear and disappear. Disks fail. Machines crash and restart. The simulator explores failure combinations that would take years to encounter in production.

Your job is different. You need to design operations that exercise interesting code paths. Not just reads and writes, but the edge cases your users will inevitably trigger. And you need to write invariants that CATCH bugs when simulation surfaces them. After a million injected faults, how do you prove your data is still correct? This division of labor is the key insight: FDB injects chaos, you verify correctness.

🔗 Designing Your Operation Alphabet

The operation alphabet is the complete set of operations your workload can perform. This is where most workloads fail: they test happy paths with uniform distribution and miss the edge cases that break production. Think about three categories:

Normal operations with realistic weights. In production, maybe 80% of your traffic is reads, 15% is simple writes, 5% is complex updates. Your workload should reflect this, because bugs often hide in the interactions between operation types. A workload that runs 50% reads and 50% writes tests different code paths than one that runs 95% reads and 5% writes. Both might be valid, but they'll find different bugs.

Adversarial inputs that customers will inevitably send. Empty strings. Maximum-length values. Null bytes in the middle of strings. Unicode edge cases. Boundary integers (0, -1, MAX_INT). Customers never respect your API specs, so model the chaos they create.

Nemesis operations that break things on purpose. Delete random data mid-test. Clear ranges that "shouldn't" be cleared. Crash batch jobs mid-execution to test recovery. Run compaction every operation instead of daily. Create conflict storms where multiple clients hammer the same key. Approach the 10MB transaction limit. These operations stress your error handling and recovery paths. The rare operations are where bugs hide. That batch job running once a day in production? In simulation, you'll hit its partial-failure edge case in minutes, but only if your operation alphabet includes it.
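
As a rough sketch, an operation alphabet can be as simple as a weighted enum. The operation names and weights below are made up for illustration, and roll stands in for a value drawn from the simulator's seeded PRNG:

#[derive(Debug)]
enum Op {
    Read,             // normal traffic
    Write,            // simple writes
    AdversarialWrite, // empty keys, max-length values, null bytes, boundary ints
    ClearRange,       // nemesis: delete data that "shouldn't" be deleted
    ConflictStorm,    // nemesis: hammer one hot key from every client
}

// `roll` comes from the simulator's PRNG so the same seed picks the same ops.
fn pick_op(roll: u64) -> Op {
    match roll % 100 {
        0..=79 => Op::Read,              // ~80% reads
        80..=93 => Op::Write,            // ~14% writes
        94..=96 => Op::AdversarialWrite, // ~3% hostile inputs
        97..=98 => Op::ClearRange,       // ~2% destructive nemesis ops
        _ => Op::ConflictStorm,          // ~1% conflict storms
    }
}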

🔗 Designing Invariants

After simulation runs thousands of operations with injected faults, network partitions, and machine crashes, how do you know your data is still correct? Unlike FDB's internal testing, Rust workloads can't inject assertions at arbitrary code points. You verify correctness in the check() phase, after the chaos ends. The key question: "After all this, how do I PROVE my data is still correct?"

One critical tip: validate during start() , not just in check() . Don't wait until the end to discover corruption. After each operation (or batch of operations), read back the data and verify it matches expectations. If you're maintaining a counter, read it and check the bounds. If you're building an index, query it immediately after insertion. Early validation catches bugs closer to their source, making debugging far easier. The check() phase is your final safety net, but continuous validation during execution is where you'll catch most issues.

An invariant is just a property that must always hold, no matter what operations ran. If you've seen property-based testing, it's the same idea: instead of assertFalse(new User(GUEST).canUse(SAVED_CARD)), you write assertEquals(user.isAuthenticated(), user.canUse(SAVED_CARD)). The first tests one case. The second tests a rule that holds for all cases.

Four patterns dominate invariant design:

Reference Models maintain an in-memory copy of expected state. Every operation updates both the database and the reference model. In check(), you compare them. If they diverge, something went wrong. Use BTreeMap (not HashMap) for deterministic iteration. This pattern works best for single-client workloads where you can track state locally.
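
A minimal reference-model sketch might look like this; the key and value types are assumptions, and the actual map would be read back from FDB inside check():

use std::collections::BTreeMap;

// The workload applies every write to both FDB and this in-memory model,
// then compares the two in check().
#[derive(Default)]
struct ReferenceModel {
    expected: BTreeMap<Vec<u8>, Vec<u8>>, // BTreeMap for deterministic iteration
}

impl ReferenceModel {
    fn record_set(&mut self, key: &[u8], value: &[u8]) {
        self.expected.insert(key.to_vec(), value.to_vec());
    }

    fn record_clear(&mut self, key: &[u8]) {
        self.expected.remove(key);
    }

    // `actual` is whatever check() read back from the database.
    fn verify(&self, actual: &BTreeMap<Vec<u8>, Vec<u8>>) -> Result<(), String> {
        if &self.expected == actual {
            Ok(())
        } else {
            Err(format!(
                "divergence: model has {} keys, database has {}",
                self.expected.len(),
                actual.len()
            ))
        }
    }
}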

Conservation Laws track quantities that must stay constant. Inventory transfers between warehouses shouldn't change total inventory. Money transfers between accounts shouldn't create or destroy money. Sum everything up and verify the conservation law holds. This pattern is elegant because it doesn't require tracking individual operations, just the aggregate property.
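
A conservation check can be a one-liner over whatever check() reads back; the function below is purely illustrative:

// After chaos, the sum of all balances read back in check() must equal the
// total seeded in setup(), no matter which transfers committed.
fn check_conservation(balances: &[i64], seeded_total: i64) -> Result<(), String> {
    let total: i64 = balances.iter().sum();
    if total == seeded_total {
        Ok(())
    } else {
        Err(format!("conservation violated: {total} != {seeded_total}"))
    }
}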

Structural Integrity verifies data structures remain valid. If you maintain a secondary index, verify every index entry points to an existing record and every record appears in the index exactly once. If you maintain a linked list in FDB, traverse it and confirm every node is reachable. The cycle validation pattern (creating a circular list where nodes point to each other) is a classic technique from FDB's own Cycle workload. After chaos, traverse the cycle and verify you visit exactly N nodes.
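
Here is a sketch of that cycle check; next stands in for the node-to-successor mapping your check() phase reads back from the database:

use std::collections::BTreeMap;

// Walk the ring starting at node 0: after exactly `n` hops we must be back at
// the start, never having closed the loop early or fallen off a missing link.
fn check_cycle(next: &BTreeMap<u64, u64>, n: u64) -> Result<(), String> {
    let mut cur = 0u64;
    for hop in 1..=n {
        cur = *next
            .get(&cur)
            .ok_or_else(|| format!("node {cur} has no successor"))?;
        if cur == 0 && hop != n {
            return Err(format!("cycle closed early after {hop} hops"));
        }
    }
    if cur != 0 {
        return Err("cycle did not close after n hops (lost or duplicated links)".into());
    }
    Ok(())
}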

Operation Logging solves two problems at once: maybe_committed uncertainty and multi-client coordination. The trick from FDB's own AtomicOps workload: log the intent alongside the operation in the same transaction. Write both your operation AND a log entry recording what you intended. Since they're in the same transaction, they either both commit or neither does. No uncertainty. For multi-client workloads, each client logs under its own prefix (e.g., log/{client_id}/). In check(), client 0 reads all logs from all clients, replays them to compute expected state, and compares against actual state. If they diverge, something went wrong, and you'll know exactly which operations succeeded. See the Rust atomic workload example for a complete implementation.
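
A minimal sketch of the same-transaction logging trick, assuming the foundationdb crate's Transaction::set and atomic_op; the log/{client}/{seq} key layout is just an illustration, not a convention of the crate:

use foundationdb::options::MutationType;
use foundationdb::Transaction;

// The counter increment and the intent log entry live in one transaction, so
// they either both commit or neither does. check() later replays the logs
// from every client to rebuild the expected counter value.
fn increment_with_log(trx: &Transaction, client_id: usize, seq: u64, delta: i64) {
    let add = delta.to_le_bytes();
    trx.atomic_op(b"counter/total", &add, MutationType::Add);

    let log_key = format!("log/{client_id}/{seq}");
    trx.set(log_key.as_bytes(), &add);
}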

🔗 The Determinism Rules

FDB's simulation is deterministic. Same seed, same execution path, same bugs. This is the superpower that lets you reproduce failures. But determinism is fragile. Break it, and you lose reproducibility. Five rules to remember:

  1. BTreeMap, not HashMap: HashMap iteration order is non-deterministic
  2. context.rnd(), not rand::random(): All randomness must come from the seeded PRNG
  3. context.now(), not SystemTime::now(): Use simulation time, not wall clock
  4. db.run(), not manual retry loops: The framework handles retries and maybe_committed correctly
  5. No tokio::spawn(): The simulation runs on a custom executor, spawning breaks it

If you take nothing else from this post, memorize these. Break any of them and your failures become unreproducible. You'll see a bug once and never find it again.
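
To make the substitutions concrete, here is a tiny sketch. The Ctx trait is a stand-in for the context the framework hands you (which exposes rnd() and now() as above), not its real type:

use std::collections::BTreeMap;

// Stand-in for the simulation context: all randomness and time flow through
// it, never through rand::random() or SystemTime::now().
trait Ctx {
    fn rnd(&mut self) -> u64;
    fn now(&self) -> f64;
}

// Deterministic sampling: the index comes from the seeded PRNG and the
// iteration order is fixed because the map is a BTreeMap.
fn sample_key<C: Ctx>(ctx: &mut C, keys: &BTreeMap<Vec<u8>, Vec<u8>>) -> Option<Vec<u8>> {
    if keys.is_empty() {
        return None;
    }
    let idx = (ctx.rnd() as usize) % keys.len();
    keys.keys().nth(idx).cloned()
}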

🔗 Architecture: The Three-Crate Pattern

Real production systems use tokio, gRPC, REST frameworks, all of which break simulation determinism. You can't just drop your production binary into the simulator. The solution is separating your FDB operations into a simulation-friendly crate:

my-project/
├── my-fdb-service/      # Core FDB operations - NO tokio
├── my-grpc-server/      # Production layer (tokio + tonic)
└── my-fdb-workloads/    # Simulation tests

The service crate contains pure FDB transaction logic with no async runtime dependency. The server crate wraps it for production. The workloads crate tests the actual service logic under simulation chaos. This lets you test your real production code, not a reimplementation that might have different bugs.

🔗 Common Pitfalls

Beyond the determinism rules above, these mistakes will bite you:

Running setup or check on all clients. The framework runs multiple clients concurrently. If every client initializes data in setup(), you get duplicate initialization. If every client validates in check(), you get inconsistent results. Use if self.client_id == 0 to ensure only one client handles initialization and validation.

Forgetting maybe_committed. The db.run() closure receives a maybe_committed flag indicating the previous attempt might have succeeded. If you're doing non-idempotent operations like atomic increments, you need either truly idempotent transactions or automatic idempotency in FDB 7.3+. Ignoring this flag means your workload might count operations twice.
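
One way to handle the flag is to gate the work on the operation's own log entry. Txn below is a stand-in trait rather than the crate's transaction type, so treat this purely as a sketch of the pattern:

// On retry with maybe_committed == true, only re-apply the increment if the
// log key written in the same transaction as the increment is absent.
trait Txn {
    fn get(&self, key: &[u8]) -> Option<Vec<u8>>;
    fn set(&mut self, key: &[u8], value: &[u8]);
    fn atomic_add(&mut self, key: &[u8], delta: i64);
}

fn idempotent_increment<T: Txn>(txn: &mut T, maybe_committed: bool, op_id: u64) {
    let log_key = format!("log/op/{op_id}");
    if maybe_committed && txn.get(log_key.as_bytes()).is_some() {
        return; // the previous attempt really did commit
    }
    txn.atomic_add(b"counter/total", 1);
    txn.set(log_key.as_bytes(), &[1]);
}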

Storing SimDatabase between phases. Each phase (setup, start, check) gets a fresh database reference. Storing the old one leads to undefined behavior. Always use the db parameter passed to each method.

Wrapping FdbError in custom error types. The db.run() retry mechanism checks if errors are retryable via FdbError::is_retryable(). If you wrap FdbError in your own error type (like anyhow::Error or a custom enum), the retry logic can't see the underlying error and won't retry. Keep FdbError unwrapped in your transaction closures, or ensure your error type preserves retryability information.
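
If you do need your own error type, keep the FdbError reachable so the retry decision can still be made. The enum below is a hypothetical example, not part of any crate:

use foundationdb::FdbError;

#[derive(Debug)]
enum WorkloadError {
    Fdb(FdbError),      // keep the original error intact
    Validation(String), // workload-level invariant failures
}

impl From<FdbError> for WorkloadError {
    fn from(e: FdbError) -> Self {
        WorkloadError::Fdb(e)
    }
}

impl WorkloadError {
    // Preserve retryability instead of erasing it behind an opaque type.
    fn is_retryable(&self) -> bool {
        match self {
            WorkloadError::Fdb(e) => e.is_retryable(),
            WorkloadError::Validation(_) => false,
        }
    }
}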

Assuming setup is safe from failures. BUGGIFY is disabled during setup(), so you might think transactions can't fail. But simulation randomizes FDB knobs, which can still cause transaction failures. Always use db.run() with retry logic even in setup, or wrap your setup in a retry loop.

🔗 The Real Value

That commit_unknown_result edge case appeared on our first simulation seed. In production, we'd still be hunting it months later. 30 minutes of simulation covers what would take 24 hours of chaos testing. But the real value of simulation testing isn't just finding bugs, it's forcing you to think about correctness. When you design a workload, you're forced to ask: "What happens when this retries during a partition?" "How do I verify correctness when transactions can commit in any order?" "What invariants must hold no matter what chaos occurs?" Designing for chaos becomes natural. And if it survives simulation, it survives production.


Feel free to reach out with questions or to share your simulation workloads. You can find me on Twitter, Bluesky or through my website.

Semi-Interactive Assembly Verification in Knuckledragger

Lobsters
www.philipzucker.com
2025-12-09 13:13:29
Comments...
Original Article

Australia’s world-first social media ban begins as millions of children and teens lose access to accounts

Guardian
www.theguardian.com
2025-12-09 13:01:04
Accounts held by users under 16 must be removed on apps that include TikTok, Facebook, Instagram, X, YouTube, Snapchat, Reddit, Kick, Twitch and Threads under banHow is Australia’s social media ban affecting you and your family?Get our breaking news email, free app or daily news podcastAustralia has...
Original Article

Australia has enacted a world-first ban on social media for users aged under 16, causing millions of children and teenagers to lose access to their accounts.

Facebook, Instagram, Threads, X, YouTube, Snapchat, Reddit, Kick, Twitch and TikTok are expected to have taken steps from Wednesday to remove accounts held by users under 16 years of age in Australia, and prevent those teens from registering new accounts.

Platforms that do not comply risk fines of up to $49.5m.

There have been some teething problems with the ban’s implementation. Guardian Australia has received several reports of those under 16 passing the facial age assurance tests, but the government has flagged it is not expecting the ban will be perfect from day one.

All listed platforms apart from X had confirmed by Tuesday they would comply with the ban. The eSafety commissioner, Julie Inman Grant, said it had recently had a conversation with X about how it would comply, but the company had not communicated its policy to users.

Bluesky, an X alternative, announced on Tuesday it would also ban under-16s, despite eSafety assessing the platform as “low risk” due to its small user base of 50,000 in Australia.

Children had spent the past few weeks undertaking age assurance checks, swapping phone numbers and preparing for their accounts to be deactivated.

The Australian chief executive and co-founder of the age assurance service k-ID, Kieran Donovan, said his service had conducted hundreds of thousands of age checks in the past few weeks. The k-ID service was being used by Snapchat among others.

Parents of children affected by the ban shared a spectrum of views on the policy. One parent told the Guardian their 15-year-old daughter was “very distressed” because “all her 14 to 15-year-old friends have been age verified as 18 by Snapchat”. Since she had been identified as under 16, they feared “her friends will keep using Snapchat to talk and organise social events and she will be left out”.

Ezra is a teen quadriplegic. He says Australia’s social media ban will make him lonelier – video

Another parent said the ban had forced him to teach his child how to break the law. “I’ve shown her how VPNs work and other methods on bypassing age restrictions,” he said. “I’ve had to set her up with her own adult YouTube account and have assisted her in bypassing TikTok’s age-estimation and will keep doing so each time it asks.”

Others said the ban “can’t come quickly enough”. One parent said their daughter was “completely addicted” to social media and the ban “provides us with a support framework to keep her off these platforms”.

The Australian prime minister, Anthony Albanese, said in an opinion piece on Sunday: “From the beginning, we’ve acknowledged this process won’t be 100% perfect. But the message this law sends will be 100% clear … Australia sets the legal drinking age at 18 because our society recognises the benefits to the individual and the community of such an approach.

“The fact that teenagers occasionally find a way to have a drink doesn’t diminish the value of having a clear, national standard.”

Polling has consistently shown that two-thirds of voters support raising the minimum age for social media to 16. The opposition, including leader Sussan Ley, have recently voiced alarm about the ban, despite waving the legislation through parliament and the former Liberal leader Peter Dutton championing it.

The ban has garnered worldwide attention, with several nations indicating they will adopt a ban of their own, including Malaysia, Denmark and Norway. The European Union passed a resolution to adopt similar restrictions, while a spokesperson for the British government told Reuters it was “closely monitoring Australia’s approach to age restrictions”.

Inman Grant told the Guardian that from Thursday, she would be sending notices to the platforms covered by the ban to find out how the implementation was progressing.

Questions included “how many accounts [they’ve] deactivated or removed, what challenges they’re finding, how they’re preventing recidivism and preventing circumvention, whether or not their abuse or reporting abuse and the appeals processes are working as planned”, she said.


Albanese said the information gathered in this process would be made public.

The regulator would need to assess whether platforms were taking reasonable steps. If they were not, it could take that platform to court to seek fines.

There would be an independent evaluation of the ban conducted by an academic advisory group examining the short-term, medium-term and longer-term impacts of the ban.

“It will look at the benefits over time, but also the unintended consequences,” Inman Grant said.

“Everything from are they sleeping? Are they interacting or are they actually getting out on the sports fields? Are they reading books? Are they taking less medication like antidepressants? Are their Naplan scores improving over time?” Inman Grant said.

Potential unintended consequences to be investigated included whether children were moving on to “darker areas of the internet”, learning how to bypass the bans through VPNs, or moving on to other platforms, she said.

Teens on Snapchat affected by the ban had been publicly sharing their mobile numbers in their profiles ahead of their accounts being shut down.

A spokesperson for Snapchat said the platform understood under-16s were disappointed by the ban but “would strongly encourage any teens using Snapchat not to publicly share their personal contact information”.

Inman Grant said she had sent notices to 15 companies not initially included in the ban, asking them to self-assess whether they should be.

Yope and Lemon8, which shot up the app store rankings as teens looked for alternatives, were among those contacted.

Headlines for December 9, 2025

Democracy Now!
www.democracynow.org
2025-12-09 13:00:00
Israeli Military Chief Says Gaza’s “Yellow Line” Will Become “New Border” for Israel, U.N. Condemns Israeli Raid on UNRWA Headquarters in Occupied East Jerusalem, Supreme Court Signals It Will Grant Trump Power to Fire Independent Agency Heads, Trump Insults Another Fem...
Original Article

Headlines December 09, 2025


Israeli Military Chief Says Gaza’s “Yellow Line” Will Become “New Border” for Israel

Dec 09, 2025

Israel’s military chief has told soldiers occupying the Gaza Strip that the “yellow line” dividing the Palestinian territory under President Trump’s ceasefire plan will become a “new border” for Israel. The comments by Lieutenant General Eyal Zamir come despite a provision in the October ceasefire deal stating that “Israel will not occupy or annex Gaza.” Such a move would give Israel control of more than half of Gaza’s territory, including farmland and the Rafah border crossing with Egypt.

Meanwhile, a new report by Reporters Without Borders finds Israel has killed more journalists in 2025 than any other country — for the third year running. The report found Israel’s military liable for the deaths of 29 Palestinian journalists, among 67 journalists killed around the world this year.

U.N. Condemns Israeli Raid on UNRWA Headquarters in Occupied East Jerusalem

Dec 09, 2025

In occupied East Jerusalem, the United Nations has condemned a raid by Israeli forces on the headquarters of the U.N.’s Relief and Works Agency for Palestine Refugees, known as UNRWA. A spokesperson for the secretary-general said the raid directly violated international law.

Stéphane Dujarric : “Police motorcycles, as well as trucks and forklifts, were brought in, and all communications were cut. Furniture, IT equipment and other property was seized, and the U.N. flag was pulled down and replaced by the Israeli flag. … This compound remains United Nations premises and is inviolable and immune from any other form of interference.”

Supreme Court Signals It Will Grant Trump Power to Fire Independent Agency Heads

Dec 09, 2025

The U.S. Supreme Court signaled Monday it’s prepared to make it easier for President Trump to fire independent government officials, despite laws barring the president from removing them without cause. On Monday, the court heard oral arguments in the case of Federal Trade Commission member Rebecca Kelly Slaughter, who was fired by the White House in March. The court’s right-wing majority cast doubt on a 90-year-old precedent known as Humphrey’s Executor, which grants a president the power to fire a board member only for “inefficiency, neglect of duty, or malfeasance in office.” Liberal Justice Sonia Sotomayor warned that move would “destroy the structure of government,” while Justice Elena Kagan warned it would grant the president near-unlimited power.

Justice Elena Kagan : “So, the result of what you want is that the president is going to have massive, unchecked, uncontrolled power not only to do traditional execution, but to make law through legislative and adjudicative frameworks.”

A ruling in the case is expected by June; until then, the Supreme Court has allowed the White House’s firing of Rebecca Kelly Slaughter and other commissioners to remain in effect.

Trump Insults Another Female Reporter as He Walks Back Support for Releasing Boat Strike Video

Dec 09, 2025

President Trump has walked back his remarks last week when asked if he would release video showing a series of strikes on an alleged drug boat in the Caribbean on September 2. Previously, Trump said he had “no problem” releasing the footage, but speaking to reporters Monday, Trump defended Defense Secretary Hegseth while insulting ABC’s Rachel Scott, who pressed him on whether he would release the full video.

Rachel Scott : “Are you committing to releasing the full video?”

President Donald Trump : “Didn’t I just tell you that?”

Rachel Scott : “You said that it was up to Secretary Hegseth.”

President Donald Trump : “You’re the most obnoxious reporter in the whole place. Let me just tell you, you are an obnoxious, a terrible — actually a terrible reporter. And it’s always the same thing with you. I told you: Whatever Pete Hegseth wants to do is OK with me.”

Recently, President Trump called CBS’s Nancy Cordes “stupid,” Katie Rogers from The New York Times “ugly,” and when Catherine Lucey of Bloomberg News asked him about releasing the Epstein files, Trump told her, “Quiet, piggy.”

Honduras Seeks to Arrest Ex-President and Narcotrafficker Juan Orlando Hernández After Trump Pardon

Dec 09, 2025

Despite claiming to target alleged drug boats in the Pacific and the Caribbean, President Trump has used his power to pardon about 100 people accused of drug-related crimes — that’s according to a Washington Post analysis. Just last week, Trump pardoned former Honduran President Juan Orlando Hernández, who was sentenced to 45 years in prison for conspiring to distribute more than 400 tons of cocaine in the U.S. On Monday night, the Honduran attorney general announced he had instructed his government and Interpol to arrest Hernández.

Meanwhile, a 2016 video unearthed by CNN of Defense Secretary Hegseth shows him repeatedly warning that the U.S. military should refuse “unlawful” orders from the President.

Pete Hegseth : “If you’re doing something that is just completely unlawful and ruthless, then there is a consequence for that. That’s why the military said it won’t follow unlawful orders from their commander-in-chief.”

Republicans Unveil Record $901 Billion Military Spending Bill

Dec 09, 2025

Republican lawmakers have unveiled a bill to authorize $901 billion in military spending for the next fiscal year. House Speaker Mike Johnson says the National Defense Authorization Act would “ensure our military forces remain the most lethal in the world.” In a statement, Public Citizen’s Robert Weissman responded, “The last person who should be entrusted with an even bigger budget is the dangerous and lawless Pete Hegseth. As Hegseth illegally targets civilian boats near Venezuela with expensive Hellfire missiles, wastefully and recklessly deploys the National Guard in cities around the country, and teases an unconstitutional and costly war, Congress should refuse to add a penny to his budget.”

Meanwhile, lawmakers have attached a provision to the NDAA that would withhold money from Hegseth’s travel budget if the Pentagon refuses to hand over video of the September 2 boat strike.

Trump Announces $12 Billion Aid Package to Farmers Hit Hard by Trade War

Dec 09, 2025

President Trump has announced a $12 billion aid package to farmers struggling from the devastating effects of his tariffs. Farm bankruptcies rose by nearly 50% this year compared to last year. President Trump’s tariffs on China cut its imports of U.S. soybeans to zero before a deal was reached in October. Democratic Senator Ron Wyden of Oregon criticized the bailout for farmers, saying, “Instead of proposing government handouts, Donald Trump should end his destructive tariff spree so American farmers can compete and win on a level playing field.”

ICE Points Guns at Crowd Protesting Arrest of Student at Augsburg University in Minneapolis

Dec 09, 2025

Image Credit: IG/ @mnicewatch

In Minnesota, students and staff at Augsburg University in Minneapolis say federal immigration agents pointed guns at a crowd that gathered to protest the arrest of an undergraduate student on Saturday. In a statement, the university wrote that the ICE agents lacked a signed judicial warrant, which is required for them to enter private buildings. It was just one of several reports of ICE agents physically threatening and wrongfully detaining people swept up in what ICE is calling “Operation Metro Surge,” targeting Minnesota’s Somali community, which President Trump described in a racist tirade last week as “garbage.”

Democratic Senator Murray Condemns ICE After Agents Release Attack Dog on Constituent

Dec 09, 2025

In Washington state, Democratic Senator Patty Murray is condemning an incident where ICE agents released an attack dog on one of her constituents last month in Vancouver. According to Senator Murray, Wilmer Toledo-Martinez suffered “horrific” injuries after an ICE agent lured him out of his home before violently arresting him. His neighbor, John Williams Sr., witnessed the attack; he spoke to TV station KGW.

John Williams Sr. : “His wife’s screaming. The kids in the car are screaming. I’m glad his 7-year-old daughter wasn’t here. The 2- and 3-year-old was here. And we were trying to ask what’s going on, and he’s telling her to 'Get back! Get back! Or we're gonna sic the dog on you.’ … I never saw nothing like this in my life close up with no one, you know. And it hurts. It really hurts, man, especially to happen to a young man like that, man. You know, a good, honest young man.”

Toledo-Martinez was hospitalized and received stitches for gruesome injuries. His attorney says ICE delayed his medical care for several hours and that a prescription for antibiotics was never filled by staff at the Northwest ICE Processing Center in Tacoma, where he’s been held since his arrest. In a statement, Senator Murray said, “This should shock the conscience of every one of us. I do not want to live in an America where federal agents can sic attack dogs on peaceful residents with impunity and face no consequences.”

California’s DOJ Announces New Online Portal for Public to Document Unlawful ICE Activity

Dec 09, 2025

California’s Department of Justice has announced a new online portal for members of the public to share videos, photos and other evidence documenting unlawful activity by ICE agents. This follows similar efforts in other states, including Illinois and New York. Meanwhile, the makers of the smartphone app ICEBlock have sued the Trump administration on First Amendment grounds, after the Justice Department pressured Apple to remove its software from its app store. Before Apple banned the software in October, ICEBlock allowed users to anonymously track reported sightings of ICE agents.

Dozen Former FBI Agents File Lawsuit Accusing Kash Patel of Unlawfully Firing Them

Dec 09, 2025

Image Credit: Jose Luis Magana/AP

A dozen former FBI agents filed a lawsuit accusing FBI Director Kash Patel and other officials of unlawfully firing them for kneeling during a 2020 protest after the death of George Floyd. The lawsuit claims that the agents knelt to “defuse a volatile situation, not as an expressive political act.” Meanwhile, Patel reportedly yelled at agents on the security detail for his girlfriend to drive her allegedly drunken friend home. This comes as a leaked report compiled by retired and active-duty FBI special agents and analysts called the agency under Patel a “rudderless ship” and “chronically under-performing.”

Paramount Skydance Launches Hostile Takeover Bid for Warner Bros. Discovery

Dec 09, 2025

Paramount Skydance has launched a nearly $78 billion hostile takeover offer for Warner Bros. Discovery, just days after Warner Bros. accepted a $72 billion deal from Netflix. Paramount said it has secured funding commitments from the sovereign wealth funds of Saudi Arabia, Abu Dhabi and Qatar, along with support from Affinity Partners, the private equity firm run by Jared Kushner, President Trump’s son-in-law. President Trump reportedly favors Paramount to acquire Warner Bros. Discovery, and remarked over the weekend that he’ll intervene in the federal review process of Netflix’s proposed deal. We’ll have more on this story later in the broadcast.

ProPublica: Trump’s Mortgages Match His Description of Mortgage Fraud

Dec 09, 2025

President Trump has accused his political enemies, including Federal Reserve Governor Lisa Cook, of mortgage fraud for claiming more than one primary residence on her loans. Now a ProPublica investigation finds that President Trump did the same thing in the 1990s, when he took out two Florida mortgages and claimed that each home would be his main residence. According to ProPublica, President Trump never lived in the two Florida houses and instead used them as rental properties. Kathleen Engel, a Suffolk University law professor and leading expert on mortgage finance, told ProPublica, “Given Trump’s position on situations like this, he’s going to either need to fire himself or refer himself to the Department of Justice. Trump has deemed that this type of misrepresentation is sufficient to preclude someone from serving the country.”

Clashes Between Thailand and Cambodia Erupt Again After Trump-Brokered Ceasefire

Dec 09, 2025

Fighting between Thailand and Cambodia has erupted again after Thailand launched airstrikes Monday along its disputed border with Cambodia. A Thai soldier and four Cambodian civilians were killed in the renewed fighting, as both sides accuse each other of breaching a ceasefire deal brokered by President Trump back in October. Earlier this year, at least 48 people were killed and 300,000 were forced to flee their homes in the five-day conflict. This is the spokesperson from the Cambodian Defense Ministry.

Gen. Maly Socheata : “The second invasion activity by the Thai side shows clearly their intention to grab their neighbor’s land using a unilateral map and using force to change borders.”

Longtime Peace Activist Cora Weiss Dies at Age 91

Dec 09, 2025

Image Credit: Reuters

Here in New York, the longtime peace activist Cora Weiss has died at the age of 91, after decades of advocacy demanding civil rights, nuclear disarmament, gender equality and the abolition of war. In the 1960s, Cora Weiss was a national leader of Women Strike for Peace, which played a major role in bringing about the end of nuclear testing in the atmosphere. She organized protests against the Vietnam War and served as president of the Hague Appeal for Peace. She was nominated for a Nobel Peace Prize multiple times. Cora Weiss also served for decades on the board of Downtown Community Television. She last appeared on Democracy Now! in 2022.

Cora Weiss : “Climate change and nuclear weapons are the apocalyptic twins. And we have to prevent one and get rid of the other. We have to abolish nuclear weapons immediately. There should be no question about it anymore. They’re too dangerous and unnecessary. And who wants to destroy the world and the lives of everybody in it?”

Cora Weiss’s husband, Peter Weiss, the well-known human rights attorney, died several weeks ago just shy of his 100th birthday. Cora Weiss died yesterday on Peter Weiss’s 100th birthday.


Go proposal: Secret mode

Lobsters
antonz.org
2025-12-09 12:37:15
Comments...
Original Article

Part of the Accepted! series, explaining the upcoming Go changes in simple terms.

Automatically erase used memory to prevent secret leaks.

Ver. 1.26 • Stdlib • Low impact

Summary

The new runtime/secret package lets you run a function in secret mode. After the function finishes, it immediately erases (zeroes out) the registers and stack it used. Heap allocations made by the function are erased as soon as the garbage collector decides they are no longer reachable.

secret.Do(func() {
    // Generate a session key and
    // use it to encrypt the data.
})

This helps make sure sensitive information doesn't stay in memory longer than needed, lowering the risk of attackers getting to it.

The package is experimental and is mainly for developers of cryptographic libraries, not for application developers.

Motivation

Cryptographic protocols like WireGuard or TLS have a property called "forward secrecy". This means that even if an attacker gains access to long-term secrets (like a private key in TLS), they shouldn't be able to decrypt past communication sessions. To make this work, session keys (used to encrypt and decrypt data during a specific communication session) need to be erased from memory after they're used. If there's no reliable way to clear this memory, the keys could stay there indefinitely, which would break forward secrecy.

In Go, the runtime manages memory, and it doesn't guarantee when or how memory is cleared. Sensitive data might remain in heap allocations or stack frames, potentially exposed in core dumps or through memory attacks. Developers often have to use unreliable "hacks" with reflection to try to zero out internal buffers in cryptographic libraries. Even so, some data might still stay in memory where the developer can't reach or control it.

The solution is to provide a runtime mechanism that automatically erases all temporary storage used during sensitive operations. This will make it easier for library developers to write secure code without using workarounds.

Description

Add the runtime/secret package with Do and Enabled functions:

// Do invokes f.
//
// Do ensures that any temporary storage used by f is erased in a
// timely manner. (In this context, "f" is shorthand for the
// entire call tree initiated by f.)
//   - Any registers used by f are erased before Do returns.
//   - Any stack used by f is erased before Do returns.
//   - Any heap allocation done by f is erased as soon as the garbage
//     collector realizes that it is no longer reachable.
//   - Do works even if f panics or calls runtime.Goexit. As part of
//     that, any panic raised by f will appear as if it originates from
//     Do itself.
func Do(f func())
// Enabled reports whether Do appears anywhere on the call stack.
func Enabled() bool
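
As a rough illustration (not from the proposal), a cryptographic helper could use Enabled to avoid redundant wrapping; useKey and encryptWith are made-up names:

// Illustrative sketch only: useKey, encryptWith, and the fallback policy
// are assumptions, not part of the proposal.
func useKey(key []byte) {
    if secret.Enabled() {
        // A caller higher up already wrapped us in secret.Do, so registers,
        // stack, and unreachable heap allocations used here will be erased
        // by the runtime.
        encryptWith(key)
        return
    }
    // Otherwise, opt into secret mode ourselves.
    secret.Do(func() {
        encryptWith(key)
    })
}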

The current implementation has several limitations:

  • Only supported on linux/amd64 and linux/arm64. On unsupported platforms, Do invokes f directly.
  • Protection does not cover any global variables that f writes to.
  • Trying to start a goroutine within f causes a panic.
  • If f calls runtime.Goexit , erasure is delayed until all deferred functions are executed.
  • Heap allocations are only erased if ➊ the program drops all references to them, and ➋ then the garbage collector notices that those references are gone. The program controls the first part, but the second part depends on when the runtime decides to act.
  • If f panics, the panicked value might reference memory allocated inside f . That memory won't be erased until (at least) the panicked value is no longer reachable.
  • Pointer addresses might leak into data buffers that the runtime uses for garbage collection. Do not put confidential information into pointers.

The last point might not be immediately obvious, so here's an example. If an offset in an array is itself secret (you have a data array and the secret key always starts at data[100] ), don't create a pointer to that location (don't create a pointer p to &data[100] ). Otherwise, the garbage collector might store this pointer, since it needs to know about all active pointers to do its job. If someone launches an attack to access the GC's memory, your secret offset could be exposed.

The package is mainly for developers who work on cryptographic libraries. Most apps should use higher-level libraries that use secret.Do behind the scenes.

As of Go 1.26, the runtime/secret package is experimental and can be enabled by setting GOEXPERIMENT=runtimesecret at build time.
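
For example, the experiment can be switched on for a build or test run like this (the package selection is illustrative):

GOEXPERIMENT=runtimesecret go build ./...
GOEXPERIMENT=runtimesecret go test ./...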

Example

Use secret.Do to generate a session key and encrypt a message using AES-GCM:

// Encrypt generates an ephemeral key and encrypts the message.
// It wraps the entire sensitive operation in secret.Do to ensure
// the key and internal AES state are erased from memory.
func Encrypt(message []byte) ([]byte, error) {
    var ciphertext []byte
    var encErr error

    secret.Do(func() {
        // 1. Generate an ephemeral 32-byte key.
        // This allocation is protected by secret.Do.
        key := make([]byte, 32)
        if _, err := io.ReadFull(rand.Reader, key); err != nil {
            encErr = err
            return
        }

        // 2. Create the cipher (expands key into round keys).
        // This structure is also protected.
        block, err := aes.NewCipher(key)
        if err != nil {
            encErr = err
            return
        }

        gcm, err := cipher.NewGCM(block)
        if err != nil {
            encErr = err
            return
        }

        nonce := make([]byte, gcm.NonceSize())
        if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
            encErr = err
            return
        }

        // 3. Seal the data.
        // Only the ciphertext leaves this closure.
        ciphertext = gcm.Seal(nonce, nonce, message, nil)
    })

    return ciphertext, encErr
}

Note that secret.Do protects not just the raw key, but also the cipher.Block structure (which contains the expanded key schedule) created inside the function.

This is a simplified example, of course — it only shows how memory erasure works, not a full cryptographic exchange. In real situations, the key needs to be shared securely with the receiver (for example, through key exchange) so decryption can work.

𝗣 21865 • 𝗖𝗟 704615 • 👥 Daniel Morsing , Dave Anderson , Filippo Valsorda , Jason A. Donenfeld , Keith Randall , Russ Cox


No ARIA is better than Bad ARIA

Lobsters
www.w3.org
2025-12-09 12:29:20
Comments...
Original Article
Read Me First

No ARIA is better than Bad ARIA

Functionally, ARIA roles, states, and properties are analogous to a CSS for assistive technologies. For screen reader users, ARIA controls the rendering of their non-visual experience. Incorrect ARIA misrepresents visual experiences, with potentially devastating effects on their corresponding non-visual experiences.

Before using ARIA or any of the guidance in this document, please take time to understand the following two essential principles.

Principle 1: A role is a promise

This code:

<div role="button">Place Order</div>

Is a promise that the author of that <div> has also incorporated JavaScript that provides the keyboard interactions expected for a button. Unlike HTML input elements, ARIA roles do not cause browsers to provide keyboard behaviors or styling.

Using a role without fulfilling the promise of that role is similar to making a "Place Order" button that abandons an order and empties the shopping cart.

One of the objectives of this guide is to define expected behaviors for each ARIA role.
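
As a non-normative sketch of what fulfilling that promise can look like (the element id and handler name are illustrative, not from the guide):

<div id="order" role="button" tabindex="0">Place Order</div>
<script>
  // Make the div focusable (tabindex="0") and activate it with both
  // Enter and Space, as a native button would.
  const orderButton = document.getElementById("order");
  orderButton.addEventListener("click", placeOrder);
  orderButton.addEventListener("keydown", (event) => {
    if (event.key === "Enter" || event.key === " ") {
      event.preventDefault();
      placeOrder();
    }
  });
  function placeOrder() { /* submit the order */ }
</script>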

Principle 2: ARIA Can Both Cloak and Enhance, Creating Both Power and Danger

The information assistive technologies need about the meaning and purpose of user interface elements is called accessibility semantics. From the perspective of assistive technologies, ARIA gives authors the ability to dress up HTML and SVG elements with critical accessibility semantics that the assistive technologies would not otherwise be able to reliably derive.

Some of ARIA is like a cloak; it covers up, or overrides, the original semantics or content.

<a role="menuitem">Assistive tech users perceive this element as an item in a menu, not a link.</a>
  <a aria-label="Assistive tech users can only perceive the contents of this aria-label, not the link text">Link Text</a>

On the other hand, some uses of ARIA are more like suspenders or belts; they add meaning that provides essential support to the original content.

<button aria-pressed="false">Mute</button>
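
A small illustrative sketch (not from the guide) of keeping that state in sync:

<button id="mute" aria-pressed="false">Mute</button>
<script>
  const muteButton = document.getElementById("mute");
  muteButton.addEventListener("click", () => {
    const pressed = muteButton.getAttribute("aria-pressed") === "true";
    muteButton.setAttribute("aria-pressed", String(!pressed));
  });
</script>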

This is the power of ARIA. It enables authors to describe nearly any user interface component in ways that assistive technologies can reliably interpret, thus making components accessible to assistive technology users.

This is also the danger of ARIA. Authors can inadvertently override accessibility semantics.

<table role="log">
  <!--
    Table that assistive technology users will not perceive as a table.
    The log role tells the browser this is a log, not a table.
  -->
</table>
<ul role="navigation">
  <!-- This is a navigation region, not a list. -->
  <li><a href="uri1">nav link 1</a></li>
  <li><a href="uri2">nav link 2</a></li>
  <!-- ERROR! Previous list items are not in a list! -->
</ul>
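
One illustrative way to avoid that error is to keep the list semantics and put the landmark on a wrapping nav element (the aria-label value is an assumption):

<nav aria-label="Primary">
  <ul>
    <li><a href="uri1">nav link 1</a></li>
    <li><a href="uri2">nav link 2</a></li>
  </ul>
</nav>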

Browser and Assistive Technology Support

Testing assistive technology interoperability is essential before using code from this guide in production. Because the purpose of this guide is to illustrate appropriate use of ARIA 1.2 as defined in the ARIA specification, the design patterns, reference examples, and sample code intentionally do not describe and implement coding techniques for working around problems caused by gaps in support for ARIA 1.2 in browsers and assistive technologies. It is thus advisable to test implementations thoroughly with each browser and assistive technology combination that is relevant within a target audience.

Similarly, JavaScript and CSS in this guide is written to be compatible with the most recent version of Chrome, Firefox, and Safari at the time of writing.

Except in cases where the ARIA Working Group and other contributors have overlooked an error, examples in this guide that do not function well in a particular browser or with a specific assistive technology are demonstrating browser or assistive technology bugs. Browser and assistive technology developers can thus utilize code in this guide to help assess the quality of their support for ARIA 1.2.

Mobile and Touch Support

Currently, this guide does not indicate which examples are compatible with mobile browsers or touch interfaces. While some of the examples include specific features that enhance mobile and touch support, some ARIA features are not supported in any mobile browser. In addition, there is not yet a standardized approach for providing touch interactions that work across mobile browsers.

More guidance about touch and mobile support is planned for future releases of the guide.

Mazda suitcase car, a portable three-wheeled vehicle that fits in the luggage

Hacker News
www.designboom.com
2025-12-09 12:20:24
Comments...
Original Article

Portable mazda suitcase car for airports and travels

Back in the early 1990s, Mazda built a suitcase car, a portable three-wheeled vehicle for airports that fits inside hard-shell luggage. A project coming from an internal contest called Fantasyard between 1989 and 1991, the concept automobile was built by seven of the company’s engineers from their manual transmission testing and research unit. They wanted a vehicle to move around airports faster, so the team bought a pocket bike and the largest hard-shell Samsonite suitcase, size 57 cm by 75 cm. They used parts from the pocket bike, including its 33.6 cc two-stroke engine that produces 1.7 PS. The handlebars went inside the suitcase, the rear wheels attached to the outside of the case, and the front wheel came through a removable hatch in the front.

Assembling the portable Mazda suitcase car could take around a minute. Workers turned the front wheel to an upright position through the removable section, and they inserted the rear wheels. Then, they attached the seat above the rear axle. In the end, the vehicle weighed 32 kilos while the engine pushed it to a top speed of 30 km/h, or 19 mph. The concept automobile shared traits with earlier Mazda vehicles because it had three wheels, like the Mazda-Go from 1931, which was a motor rickshaw sold in Japan. Then, there’s the low center of gravity, which was found in the previous MX-5 roadster. So far, the portable Mazda suitcase car has never made it to production.

all images courtesy of Mazda UK

Two built versions, with the US one still existing

The early 1990s marked changes at Mazda, as the company faced high demand for its MX-5 roadster. In 1991, Mazda became the first Japanese brand to win the 24 Hours of Le Mans race with a rotary-engined car, the 787B. That same year, Mazda showed a hydrogen-powered rotary concept named HR-X. The company ran Fantasyard, where teams from different departments competed to create mobility ideas, and engineers had small budgets for their projects. It is from this event that the portable Mazda suitcase car came to fruition as a concept automobile.

During its time, it received so much media attention that the company even built two versions (US and Europe). The European model appeared at the 1991 Frankfurt International Motor Show next to the 787B racer, but the original prototype got destroyed by accident months after the Fantasyard event. The US model still exists (likely owned by a collector), while the European one is missing. While the company never produced it, the portable Mazda suitcase car showcased a design direction for the company, one that focuses on small, practical mobility.

the concept automobile could fit inside a hard-shell Samsonite suitcase

the handlebars went inside the suitcase, and the rear wheels were attached to the outside of the case

the front wheel came through a removable hatch in the front

view of the concept vehicle’s rear

project info:

name: Suitcase car

car manufacturer: Mazda | @mazda_uk

AI vs. Human Drivers

Schneier
www.schneier.com
2025-12-09 12:07:53
Two competing arguments are making the rounds. The first is by a neurosurgeon in the New York Times. In an op-ed that honestly sounds like it was paid for by Waymo, the author calls driverless cars a “public health breakthrough”: In medical research, there’s a practice of ending a study ...
Original Article

Two competing arguments are making the rounds. The first is by a neurosurgeon in the New York Times . In an op-ed that honestly sounds like it was paid for by Waymo, the author calls driverless cars a “public health breakthrough”:

In medical research, there’s a practice of ending a study early when the results are too striking to ignore. We stop when there is unexpected harm. We also stop for overwhelming benefit, when a treatment is working so well that it would be unethical to continue giving anyone a placebo. When an intervention works this clearly, you change what you do.

There’s a public health imperative to quickly expand the adoption of autonomous vehicles. More than 39,000 Americans died in motor vehicle crashes last year, more than homicide, plane crashes and natural disasters combined. Crashes are the No. 2 cause of death for children and young adults. But death is only part of the story. These crashes are also the leading cause of spinal cord injury. We surgeons see the aftermath of the 10,000 crash victims who come to emergency rooms every day.

The other is a soon-to-be-published book: Driving Intelligence: The Green Book . The authors, a computer scientist and a management consultant with experience in the industry, make the opposite argument. Here’s one of the authors:

There is something very disturbing going on around trials with autonomous vehicles worldwide, where, sadly, there have now been many deaths and injuries both to other road users and pedestrians. Although I am well aware that there is not, senso stricto , a legal and functional parallel between a “drug trial” and “AV testing,” it seems odd to me that if a trial of a new drug had resulted in so many deaths, it would surely have been halted and major forensic investigations carried out and yet, AV manufacturers continue to test their products on public roads unabated.

I am not convinced that it is good enough to argue from statistics that, to a greater or lesser degree, fatalities and injuries would have occurred anyway had the AVs had been replaced by human-driven cars: a pharmaceutical company, following death or injury, cannot simply sidestep regulations around the trial of, say, a new cancer drug, by arguing that, whilst the trial is underway, people would die from cancer anyway….

Both arguments are compelling, and it’s going to be hard to figure out what public policy should be.

This paper, from 2016, argues that we're going to need other metrics than side-by-side comparisons: "Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability?":

Abstract : How safe are autonomous vehicles? The answer is critical for determining how autonomous vehicles may shape motor vehicle safety and public health, and for developing sound policies to govern their deployment. One proposed way to assess safety is to test drive autonomous vehicles in real traffic, observe their performance, and make statistical comparisons to human driver performance. This approach is logical, but is it practical? In this paper, we calculate the number of miles of driving that would be needed to provide clear statistical evidence of autonomous vehicle safety. Given that current traffic fatalities and injuries are rare events compared to vehicle miles traveled, we show that fully autonomous vehicles would have to be driven hundreds of millions of miles and sometimes hundreds of billions of miles to demonstrate their reliability in terms of fatalities and injuries. Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles—an impossible proposition if the aim is to demonstrate their performance prior to releasing them on the roads for consumer use. These findings demonstrate that developers of this technology and third-party testers cannot simply drive their way to safety. Instead, they will need to develop innovative methods of demonstrating safety and reliability. And yet, the possibility remains that it will not be possible to establish with certainty the safety of autonomous vehicles. Uncertainty will remain. Therefore, it is imperative that autonomous vehicle regulations are adaptive—designed from the outset to evolve with the technology so that society can better harness the benefits and manage the risks of these rapidly evolving and potentially transformative technologies.

One problem, of course, is that we treat death by human driver differently than we do death by autonomous computer driver. This is likely to change as we get more experience with AI accidents—and AI-caused deaths.




Quoting Claude

Simon Willison
simonwillison.net
2025-12-09 12:04:43
I found the problem and it's really bad. Looking at your log, here's the catastrophic command that was run: rm -rf tests/ patches/ plan/ ~/ See that ~/ at the end? That's your entire home directory. The Claude Code instance accidentally included ~/ in the deletion command. — Claude, after Cla...
Original Article

I found the problem and it's really bad. Looking at your log, here's the catastrophic command that was run:

rm -rf tests/ patches/ plan/ ~/

See that ~/ at the end? That's your entire home directory. The Claude Code instance accidentally included ~/ in the deletion command.

Claude , after Claude Code deleted most of a user's Mac

After the Bubble

Hacker News
www.tbray.org
2025-12-09 12:02:02
Comments...
Original Article

The GenAI bubble is going to pop. Everyone knows that. To me, the urgent and interesting questions are how widespread the damage will be and what the hangover will feel like. On that basis, I was going to post a link on Mastodon to Paul Krugman’s Talking With Paul Kedrosky . It’s great, but while I was reading it I thought “This is going to be Greek to people who haven’t been watching the bubble details.” So consider this a preface to the Krugman-Kedrosky piece. If you already know about the GPU-fragility and SPV-voodoo issues, just skip this and go read that.

Depreciation · When companies buy expensive stuff, for accounting purposes they pretend they haven’t spent the money; instead they “depreciate” it over a few years. That is to say, if you spent a million bucks on a piece of gear and decided to depreciate it over four years, your annual financials would show four annual charges of $250K. Management gets to pick your depreciation period, which provides a major opening for creative accounting when the boss wants to make things look better or worse than they are.

Even when you’re perfectly honest it can be hard to choose a fair figure. I can remember one of the big cloud vendors announcing they were going to change their fleet depreciation from three to four years and that having an impact on their share price.

Depreciation is orderly whether or not it matches reality: anyone who runs a data center can tell you about racks with 20 systems in them that have been running fine since 2012. Still, orderly is good.

In the world of LLMs, depreciation is different. When you’re doing huge model-building tasks, you’re running those expensive GPUs flat out and red hot for days on end. Apparently they don’t like that, and flame out way more often than conventional computer equipment. Nobody who is doing this is willing to come clean with hard numbers but there are data points, for example from Meta and (very unofficially) Google .

So GPUs are apparently fragile. And they are expensive to run because they require huge amounts of electricity. More, in fact, than we currently have, which is why electrical bills are spiking here and there around the world.

Why does this matter? Because when the 19th-century railway bubble burst, we were left with railways. When the early-electrification bubble burst, we were left with the grid. And when the dot-com bubble burst, we were left with a lot of valuable infrastructure whose cost was sunk, in particular dark fibre. The AI bubble? Not so much; what with GPU burnout and power charges, the infrastructure is going to be expensive to keep running, not something that new classes of application can pick up and use on the cheap.

Which suggests that the post-bubble hangover will have few bright spots.

SPVs · This is a set of purely financial issues but I think they’re at the center of the story.

It’s like this. The Big-Tech giants are insanely profitable but they don’t have enough money lying around to build the hundreds of billions of dollars worth of data centers the AI prophets say we’re going to need. Which shouldn’t be a problem; investors would line up to lend them as much as they want, because they’re pretty sure they’re going to get it back, plus interest.

But apparently they don’t want to borrow the money and have the debts on their balance sheet. So they’re setting up “Special Purpose Vehicles”, synthetic companies that are going to build and own the data centers; the Big Techs promise to pay to use them, whether or not genAI pans out and whether or not the data centers become operational. Somehow, this doesn’t count as “debt”.

The financial voodoo runs deep here. I recommend Matt Levine’s Coffee pod financing and the Financial Times’ A closer look at the record-smashing ‘Hyperion’ corporate bond sale . Levine’s explanation has less jargon and is hilarious; the FT is more technical but still likely to provoke horrified eye-rolls.

If you think there’s a distinct odor of 2008 around all this, you’d be right.

If the genAI fanpholks are right, all the debt-only-don’t-call-it-that will be covered by profits and everyone can sleep sound. Only it won’t. Thus, either the debts will apply a meat-axe to Big Tech profits, or (like 2008) somehow they won’t be paid back. If whoever’s going to bite the dust is “too big to fail”, the money has to come from… somewhere? Taxpayers? Pension funds? Insurance companies?

Paul K and Paul K · I think I’ve set that piece up enough now. It points out a few other issues that I think people should care about. I have one criticism: They argue that genAI won’t produce sufficient revenue from consumers to pay back the current investment frenzy. I mean, they’re right, it won’t, but that’s not what the investors are buying. They’re buying the promise, not of more revenue , but of higher profits that happen when tens of millions of knowledge workers are replaced by (presumably-cheaper) genAI.

I wonder who, after the loss of those tens of millions of high-paid jobs, are going to be the consumers who’ll buy the goods that’ll drive the profits that’ll pay back the investors. But that problem is kind of intrinsic to Late-stage Capitalism.

Anyhow, there will be a crash and a hangover. I think the people telling us that genAI is the future and we must pay it fealty richly deserve their impending financial wipe-out. But still, I hope the hangover is less terrible than I think it will be.


Unofficial language server for gren, in rust

Lobsters
github.com
2025-12-09 11:59:47
Comments...
Original Article

Unofficial LSP language server for gren , focusing on performance and reliability. To use, install rust and

cargo +nightly install --git https://github.com/lue-bird/gren-language-server-unofficial

Then point your editor to gren-language-server-unofficial , see also specific setups .

You can also set their paths in the language server settings:

  • gren-language-server-unofficial.grenPath: string : compiler executable, default "gren" . If the language server can't find it in the $PATH , please set this option to the path that which gren prints :)
  • gren-language-server-unofficial.grenFormatPath: "builtin" | string : formatter executable, default "builtin" . "builtin" is a fast, unofficial rust formatter

editor setups

feel free to contribute, as I only use vscodium

vscode-like

pre-built

  1. download https://github.com/lue-bird/gren-language-server-unofficial/blob/main/vscode/gren-language-server-unofficial-0.0.1.vsix
  2. open the command bar at the top and select: >Extensions: Install from VSIX

build from source

  1. clone this repo
  2. open vscode/
  3. run npm run package to create the .vsix
  4. open the command bar at the top and select: >Extensions: Install from VSIX

server only

There is no built-in language server bridge as far as I know but you can install an extension like vscode-generic-lsp-proxy that will work for any language server. Then add a .vscode/lsp-proxy.json like

[
  {
    "languageId": "gren",
    "command": "gren-language-server-unofficial",
    "fileExtensions": [
      ".json",
      ".gren"
    ]
  }
]

helix

write to ~/.config/helix/languages.toml :

[language-server.gren-language-server-unofficial]
command = "gren-language-server-unofficial"
[[language]]
name = "gren"
scope = "source.gren"
injection-regex = "gren"
roots = ["gren.json"]
file-types = ["gren", "json"]
comment-token = "--"
block-comment-tokens = { start = "{-", end = "-}" }
indent = { tab-width = 4, unit = "    " }
language-servers = [ "gren-language-server-unofficial" ]
auto-format = true

not planned

  • type inference
  • directly integrating test running and similar
  • codelens, workspace symbols, code folding, linked editing
  • gren.json help

not sure (Please give me feedback on this)

  • support renaming punned record fields by adding { originalName = newName }
  • show all module exposes when hovering (..) (only if I have time and there is interest)
  • add code actions like "expose (including variants)", "inline", "inline all uses" (leaning towards no as it is fairly complicated, though it is very useful for sure)
  • show function parameter names (leaning towards no, as they are often confusing if they are curried, reveal non-exposed variant patterns, have more parameters than the type suggests, are undescriptive etc)
  • currently, an exposed member will still be suggested even when a local module-declared reference/local binding with the same name exists. Likewise, a local module-declared reference will still be suggested even when a local binding with the same name exists. (somewhat easily fixable but I don't really see the harm in directly showing this shadowing in your face)
  • your idea 👀

known limitations

  • It is possible that a gren module belongs to multiple projects when source directory paths overlap between projects. This throws a wrench into pretty much all existing code (likely internal document source desync and a more limited lsp feature range in one of the containing projects). This situation is, I assume, fixable by special-casing their storage and handling, but it would require a lot of work

setup for developing

Rebuild the project with

Then point your editor to the created ???/target/debug/gren-language-server-unofficial .

log of failed optimizations

  • switching to mimalloc, ~>25% faster (really nice) at the cost of 25% more memory consumption. Might be worth for some people but I'm already worried about our memory footprint!
  • declarations.shrink_to_fit(); saves around 0.6% of memory at the cost of a bit of speed
  • upgrading lto from the default to "thin" and from "thin" to "fat" both improve runtime speed by ~13% compared to the default (and reduce binary size) but increase build time by about 30% (default to thin) and 15% (thin to fat). As this prolongs installation and prevents people from quickly trying it out, the default is kept. If this language server gets distributed as a binary, or people end up using it a lot, "thin" might become a reasonable trade-off.

optimizations to try

  • reparse incrementally (somewhat easy to implement, but for me at least it's already pretty much fast enough without it? More data points welcome)
  • switch to position_encoding: Some(lsp_types::PositionEncodingKind::UTF8) . This makes source edits and parsing easier and faster at the cost of compatibility with lsp clients below version 3.17.0. Is that acceptable? (leaning towards yes). Also validate if gren --report region column is UTF-8 or UTF-16 (seems to be UTF-16 strangely)
  • if memory consumption turns out to be a problem, stop storing the source in memory and request full file content on each change (potentially only for dependencies). This adds complexity and is slower, so only if necessary.
  • in syntax tree, use separate range type for single-line tokens like keywords, symbols, names etc to save on memory consumption
  • switch most syntax tree Box<str> s to https://docs.rs/smallstr/0.3.1/smallstr/ to for example speed up collecting references (e.g. for rename)
  • in syntax tree, use Box<[]> instead of Vec for common nodes like call arguments
  • on init, read modules in parallel, not just projects, to even out difference in project size (seems not worth using threads, maybe something more lightweight?)

Why frozen test fixtures are a problem on large projects and how to avoid them

Hacker News
radanskoric.com
2025-12-09 11:58:02
Comments...
Original Article

Tests grow to thousands
All make their claim on fixtures
Frozen by demands

An ancient Japanese Haiku about a common problem with software test fixtures

Act 1: The problem, frozen fixtures

Fixtures have a lot going for them: super fast, clearly structured, reusable across tests …

That last one is also the source of a common problem in large test suites. Every time you change fixtures you risk falsely breaking some tests. Meaning: the test fails even though the feature it tests still works . This is because every test makes assumptions about the fixtures. This is necessarily part of the test setup, even if it is not explicit in the test code. If the code breaks those assumptions the test itself will no longer work. The more tests there are, the more likely you are to falsely break some of them when you change fixtures.

This is why sufficiently complex data model fixtures tend to become frozen after a certain number of tests. If you aren’t careful, when you get to 1000s of tests, making any change to fixtures can break 10s or even 100s of unrelated tests. It becomes really hard to fix them so you try to avoid directly modifying the fixtures at all. You start to work around it (more on that below) and they stop changing. Hence, frozen fixtures.

Thankfully, there are ways to write tests to minimise this effect but it requires discipline.

Act 2: The bad solutions

First, let me go over 2 approaches I’ve seen on projects and why I think they’re bad:

  1. If current fixtures can’t be reused, create new ones. This is especially prominent in multi-tenant applications: create a brand new tenant in fixtures just for the new tests you’re adding. This is a road of ever increasing fixture size. It becomes really hard to understand which fixtures are for which tests and the testing database starts to become larger and larger. Reviewing existing fixtures for reuse becomes harder. It becomes easier to just add new fixtures for the next test which makes the problem worse.
  2. Use code inside the test to modify the fixture records just for this test. It seems obvious: let's just modify the DB to match the state we need for the test. Congratulations! You've started to re-discover factories, except you're doing it ad hoc. If you start going down that road, consider using both fixtures and factories (see the sketch after this list). I'm not being sarcastic, this combination can work really well.
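
As a rough illustration of that combination (assuming FactoryBot; the model, attribute, and class names are made up, not from the article):

test "archiving hides the project from the dashboard" do
  # Reuse a stable fixture for the shared context...
  tenant = tenants(:acme)
  # ...and build the one-off record this test needs with a factory,
  # instead of adding yet another fixture or mutating existing ones.
  project = create(:project, tenant: tenant, status: :archived)

  refute_includes Dashboard.new(tenant).projects, project
end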

Act 3: The right solution

First, recognise that every test is written to test a specific property of the code. It should be red if the property breaks and green if satisfied. Diverging from it in any direction is bad, in different ways:

  1. A test that passes while the property breaks gives us false confidence. That’s obviously bad because we could ship a bug.
  2. However, a test that breaks while the property holds distracts us with false information. That’s also bad because it is wasting our precious development time and reducing our confidence in our test suite.

To put it zenly 1 : a test should test only that which it is meant to test, no more and no less .

A great solution to remedy frozen fixtures is turning this principle to 11 2 .

Test only what you want to test

This means getting into the habit of asking yourself what a specific test actually tries to test. Then, write the test code to directly test exactly that property. This is not trivial but it becomes effortless with practice. Writing good tests is a skill that needs practice like any other programming skill .

I know that this still sounds abstract so here are 2 very concrete examples.

Example 1: Testing collection content

Testing collections is especially problematic because it has to involve multiple records. This means you’re probably using fixture records that are also used in many other tests. Either that or your fixtures list is crazy long.

Let’s say you are testing a scope on a model. You might be tempted to write something like:

test "active scope returns active projects" do
    assert_equal [projects(:active1), projects(:active2)], Project.active
end

This test has just made it impossible to introduce another active project without breaking it, even if the scope was not actually broken. Add a new variant of an active project for an unrelated test and now you have to also update this test.

Instead, try this:

test "active scope returns active projects" do
    active_projects = Project.active
    assert_includes active_projects, projects(:active1)
    assert_includes active_projects, projects(:active2)
    refute_includes active_projects, projects(:inactive)
end

The test will now:

  1. Fail if the scope no longer includes active projects.
  2. Fail if the scope now includes inactive projects.
  3. Not be affected when new projects are added to fixtures.

This last one is key. By slightly rewriting the test, we’ve avoided freezing fixtures.

Example 2: Testing collection order

A related example is checking that a returned collection is in the correct order.

You might be tempted to do something like this:

test "ordered sorts by project name" do
  assert_equal Project.ordered, [projects(:aardvark), projects(:active1), projects(:inactive)]
end

Instead, think like a zen master: to test sorting, test that it is sorted :

test "ordered sorts by project name" do
  names = Project.ordered.map(&:name)
  assert_equal names, names.sort
end

The test will now:

  1. Fail if the collection is not sorted.
  2. Not be affected by any other change.

To test a specific case of ordering, focus the test even more and only test that very specific ordering. For example, imagine you just fixed a bug where non latin characters were incorrectly sorted and you want to add a regression test. Do it this way:

test "ordered correctly sorts non latin characters" do
  # Č and Ć are non latin letters of the Croatian alphabet and unfortunately
  # their unicode code points are not in the same order as they are in the
  # alphabet, leading vanilla Ruby to sort them incorrectly.
  assert_equal [projectĆ, projectČ], Project.ordered & [projectČ, projectĆ]
end

The test will now:

  1. Fail if the non latin characters are incorrectly sorted.
  2. Not be affected by any other change in sorting logic.

Rewriting the test slightly made it both more precise and kept it from freezing the fixtures.

Act 4: So … this makes fixtures better than factories?

Now that you know how to minimise fixtures’ downsides without sacrificing any of the benefits, surely, this means they’re better than factories? Right?

Fixtures vs factories is one of those topics that you really wouldn’t expect people to have strong feelings about but somehow they do. I like to irritate people by being pragmatic and not picking a side.

Sometimes I use fixtures sometimes factories. They have different tradeoffs and each could fit a different project better.

Sometimes I decide to go wild and use both, because that way I can annoy everyone at once!

Which is why I didn’t write an article about which one is better, enough digital ink has been spilled on that hill. I did write before about a principle that makes factories easier to use , if that is something you’re interested in.

An apologia for lazy evaluation

Lobsters
epicandmonicisnotiso.blogspot.com
2025-12-09 11:54:52
Comments...
Original Article

As a programming language, laziness is the defining feature of Haskell. Every other quality, such as higher order functions, the Hindley-Milner type system, or being based on the lambda calculus, is present in other languages. In fact, Haskell was born as a committee-initiated language to avoid a proliferation of non-strict languages at the end of the 80s, putting all the effort behind the same cart.

If you take some time and look at what HN , Stack Overflow or Reddit have to say about the general sentiment of non-Haskell programmers on laziness, it can be summed up in a single word: anxiety . It seems that space leaks due to laziness are unavoidable and that Haskell programmers choose to live with them. The methods of detecting them come from dynamic analysis of compiled programs . There is no general advice on how to avoid them apart from experience; and even then it is not enough, as expert Haskell programmers still trip over them from time to time. What hope do new Haskell programmers have then?

I think this sentiment can be summed up in two general statements that I have seen pop up on the web:

  1. Lazy/non-strict evaluation in functional programming languages will unavoidably lead to space leaks and, consequently, general performance degradation. You cannot rely on such a system in production. Strict functional languages are free from these concerns and should be preferred.

  2. Laziness by default is a mistake, but we can recover the benefits of laziness with an explicit iterator concept as in rust or python. No surprising space leaks that way.

I think these are the strongest versions of the statements I have seen; it is not my intention to strawman my way through this apologia. But before discussing these points, we need to talk about value systems.

Differences in values separate programming language communities

There was a presentation done some years ago on a PL conference that made the following statement:

The real distinction between programming languages and their communities come from different orders on their value systems

I did not find the video on Youtube nor Google (has google gotten worse?). If someone knows it, please send me a message. (edit: It was the great Bryan Cantrill as always )

So, for example, what separates the C++ community from the other language communities? The extreme emphasis on performance. That is why there is so much emphasis on zero cost abstractions. Lisp communities care more about syntactic extensibility than performance, etc.

Haskell can be plenty fast if you put in the effort, but those Haskell programs won't look like the normal ones you encounter. Take a look at the Haskell entries in the language shootout benchmark. These are written using mutable arrays and explicit recursion everywhere. The mapping to a C-like version is almost direct in most cases.

For the Haskell community, having ultimate single-core performance in common code is not a priority; we prefer compositional and correct/reliable code over fast code. With this in mind, we can approach the questions from the previous section.

Is not the presence of space leaks a matter of correctness and reliability too?

That might seem the case from afar, but once you start writing Haskell and experiencing these space leaks, you will notice that:

  a. 90% of the space leaks you write end up adding a tiny amount of memory usage to your functions, mostly unnoticeable. Think thunks like (1 + 2) that are subjected to demand analysis under optimization.

  b. 1-2% of them are serious enough to require profiling your code with cost-centres.

There is a power-law distribution in the amount of memory these space leaks consume; thus, for the grand majority of the space leaks you will write, they are a performance concern, not a correctness/reliability concern .

It is not a natural law that the distribution of space-leak severity looks like this; it has required countless hours of profiling and optimization on the part of GHC developers. The presence of a demand analyser and fusion optimizations in GHC pushes the space leaks of category (b) to be even rarer. But it is true that for the remaining space leaks of category (b), learning how to profile and insert cost centres is a necessity, sorry.

To sum up: space leaks are mostly a performance concern, and for the rare cases where they are not, you need to learn to profile to manage them. The Haskell community lives with this because its main focus is not performance loss but the compositional benefits .

The compositional benefits

In a strict functional language, you program differently than in a lazy one. In a strict FP language, you end up writing more explicitly recursive functions (hopefully with tail calls) rather than using higher order functions that express your intent . And if you use higher order functions, you need to take into consideration whether the function is tail recursive, otherwise you can end up with stack safety issues . In lazy languages, your intent is better expressed by higher order functions, as you can rely on guarded (co)recursion and laziness to avoid stack issues. This helps with the correctness goal.

But more than that, you end up with compositional benefits. In a lazy language, function composition (either with (.) or with function calls) has the nice property that the consumer of the pipeline drives the reduction instead of the producer, as in a strict language . And in a pure language with inductive data types, you will end up with a bunch of tiny pipelines . Let's work with the following example to see what this buys us:

let y = (f . g . h . i) x

In a strict language, evaluation of this would proceed by having the output of i x fully evaluated as an inductive data type, then passing that complete structure at once to h , which would repeat the process until reaching f . Even if f could short-circuit the evaluation given an edge case, it has to consume all the intermediate values to be able to do so.

In a lazy language, evaluation proceeds by demands and data dependencies . If we want the value y , we need to reduce the pipeline, and we need f to produce a value. That will probably introduce a demand on the results of g , but that might not be the case: f could be satisfied with just part of the inductive data type.

This has a bunch of interesting consequences:

  • For library authors, you can provide big product data types and let the consumers pick the parts they care about. This is what people mean by "lazy languages have products, strict ones sums".
  • You can define new control structures by controlling which parts of the structures you choose to demand. Stuff like the Alternative type class would not be possible without laziness.
  • With inductive data types, the garbage collector has an easier time, as it can collect unneeded or already consumed parts of the structure as they come and go. Also, subparts that were never generated (because they were under a thunk) never add pressure to the nursery of the GC. Less garbage is a win.
  • Some asymptotics are improved in composed code. The well-known example head . sort runs in O(n) time (see the sketch below).
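
A minimal sketch of consumer-driven evaluation (the function names are illustrative; sort is GHC's lazy Data.List.sort):

import Data.List (sort)

-- The consumer (take 5) drives evaluation: map never runs past
-- the fifth element of the infinite list.
firstSquares :: [Integer]
firstSquares = take 5 (map (^ 2) [0 ..])

-- With a lazy sort, taking the head forces only enough work to
-- find the minimum rather than fully sorting the rest of the list.
smallest :: [Int] -> Int
smallest = head . sort

main :: IO ()
main = do
  print firstSquares          -- [0,1,4,9,16]
  print (smallest [3, 1, 2])  -- 1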

You could say: so what? Don't create the unnecessary parts in the first place. But if you want to collaborate with other programmers, massaging the output of their functions becomes a major concern, and having a system that respects data dependencies becomes a godsend.

To sum up: laziness supercharges ordinary function composition to the point that it becomes the primary way you organize your systems in Haskell . A weaker form of function composition would mean that other patterns, such as imperative or object-oriented programming, could be seen as alternatives when organizing the system. This, plus being able to use higher order functions without extra care, is why Haskellers prefer lazy evaluation.

What about explicit iterator like in Rust?

These iterators are explicit state machines with a next() function. They are quite nice. They are, however, an extra concept you have to learn. With lazy evaluation, every inductive data structure pulls double duty: as a data structure and as a control structure .

So this means that a library author implicitly provides their yield points on each constructor, and you, the consumer of the library, can take them apart. With iterators, this process is explicit in the .iter() or .into_iter() functions.

In an imperative language with statements, this is actually pretty great. But in a functional language with just expressions and pure functions, I prefer the merging of concerns that inductive (co)data types provide. This is a whole dimension I just don't have to care about.

Although there is a nitpick here: in the presence of effects, you also need a streaming library for functional languages. That is a separate matter.

But what about those space leaks that affect the correctness of a program?

Most of my Haskell code ends up being pipelines, even if it is not written explicitly as a bunch of (.) everywhere. I have encountered a pattern I call "being a good consumer" that helps avoid the major space leaks of type (b) I have run into. In a future post I will discuss this concept and link to some PRs that implement fixes it proposes.

Microsoft investigates Copilot outage affecting users in Europe

Bleeping Computer
www.bleepingcomputer.com
2025-12-09 11:48:39
Microsoft is working to mitigate an ongoing incident that has been blocking users in Europe from accessing the company's AI-powered Copilot digital assistant. [...]...
Original Article

Microsoft Copilot

Microsoft is working to mitigate an ongoing incident that has been blocking users in Europe from accessing the company's AI-powered Copilot digital assistant.

Additionally, some users who can access the affected service may experience degraded functionality with specific features.

"We're investigating an issue in which users in the United Kingdom may be unable to access Microsoft Copilot, or experience degraded functionality with some features," Microsoft said when it acknowledged the issue one hour ago.

After reviewing service monitoring telemetry to isolate the root cause of the outage, Microsoft added that the incident was caused by a capacity scaling issue that is now being addressed.

"Indications from service monitoring telemetry suggest an unexpected increase in traffic has resulted in impact. We're continuing to investigate further to determine the next steps required," Microsoft said .

"We've identified an issue impacting service autoscaling to meet demand. We're manually scaling capacity to improve service availability, and we're monitoring this closely to ensure the expected outcome is achieved."

According to a service alert ( CP1193544 ) seen by BleepingComputer, the outage may impact any user attempting to access Microsoft Copilot within the United Kingdom or Europe.

Microsoft is also tracking a separate incident ( DZ1193516 ) causing some admins to experience errors when accessing Microsoft Defender for Endpoint features, including device inventory and threat analytics. While it has yet to share how many users are affected, the issue has been tagged as an incident in the admin center, a flag usually designating service problems with significant user impact.

One week ago, Microsoft also mitigated an outage that blocked access to some Defender XDR portal capabilities, including threat hunting alerts.

This is a developing story...


Offline cybersecurity AI using RAG + local LLM (Python, FAISS, Llama 3.1)

Lobsters
gitlab.com
2025-12-09 11:37:14
Built an offline AI assistant for security work in air-gapped environments (SCIFs, classified networks, etc.). Runs entirely local - no API calls, no telemetry. Technical approach: RAG with 360k embedded chunks (sentence-transformers: all-MiniLM-L6-v2) FAISS for vector similarity search Local LLM i...
Original Article

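As a rough sketch of the retrieval step described in the summary above (assuming the sentence-transformers and faiss-cpu packages; the chunk texts and wiring are illustrative, not the project's code):

from sentence_transformers import SentenceTransformer
import faiss
import numpy as np

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# A couple of toy document chunks standing in for the real corpus.
chunks = [
    "NIST SP 800-53 AC-2 covers account management controls.",
    "Disable SMBv1 to reduce exposure to EternalBlue-class exploits.",
]
embeddings = encoder.encode(chunks, normalize_embeddings=True)

# Inner product over normalized vectors is cosine similarity.
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(np.asarray(embeddings, dtype="float32"))

query = encoder.encode(["How do I harden SMB?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), k=1)
context = chunks[ids[0][0]]
# The retrieved context would then be fed, together with the question,
# to a locally hosted Llama 3.1 model; that step is omitted here.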

Brent's Encapsulated C Programming Rules (2020)

Hacker News
retroscience.net
2025-12-09 11:16:10
Comments...
Original Article

Below are some rules that I have developed over a long period of time writing fully encapsulated C programs. C is my favorite language and I love the freedom and exploration it allows me. I also love that it is so close to Assembly and I love writing assembly for much of the same reasons!

NOTE: You may see references to ‘perfect encapsulation’ throughout. I offer both a ‘performance’ and a ‘pure encapsulation’ approach to C here (first two headers). So feel free to interpret the rest of the rules based on the approach.

JMP

Pure encapsulation

One of the great things about C is that it allows for "pure encapsulation". What this means is that you can explain all the intent of your code through the header file and the developer who uses your lib/code never has to look at the actual implementations of the code. Now to take this a step further, we all know that C supports the struct keyword to group data, and we can also make the members of a struct hidden completely from the developer using the library. For example, we could declare the following header and C files:

vec3.h

#ifndef VEC3_H
#define VEC3_H

struct Vec3;

#endif

vec3.c

#include "vec3.h"

struct Vec3 {
	float x;
	float y;
	float z;
};

As you can see in the above code sample, if you were just to have the header file, you would not know that this vector is implemented using 3 floats . This is very important for pure encapsulation. With this, you can completely control the behavior of the struct and its contents using readable functions and not worry about the developer using the code directly mutating the members of your struct . Now that you've created pure encapsulation, you are able to feel safe knowing that developers can't new up the struct or abuse its contents from anywhere other than through the code you've written in your c file.

No encapsulation performance

One of the flaws with pure encapsulation is that you can see a drop in performance. Having a bunch of functions to get inner members of a structure also blocks the compiler from optimizing as well as it could. Member hiding is not usually because we don't trust the end developer with the secrets of our structures, but is often so they don't make mistakes by changing things they shouldn't. Also, member hiding helps so that we can easily update our code without changing the interface that developers rely on.

That being said, if we are dealing with performance critical code, or just want extra optimization by our compiler (and/or to write less code), we can expose the members of our structure. However, let's be smart about exposing these members so that developers don't accidentally make mistakes with their newfound power.

Enter const , our best friend in this scenario. We can not only mark our members as const before their type, but also after their type. The general rule of thumb to remember is, if it is a pointer, the const goes after the type, otherwise put it before the type. In the simple example below, you can see how pointers have the const after, and the rest have const before their type declaration.

struct Employer {
	char* const name;
	const int years;
};

struct Employee {
	struct Employer* const employer;
	char* const name;
	const int age;
};

In this way we are able to expose the fields of the struct to the rest of the code for compiler optimizations, ease of access, etc., while also being able to prevent developers from directly assigning/changing the values of those fields. The obvious downside to this is that you will need to either create a macro, or manually cast away the const to assign the fields when changing them in the implementation C file. I would recommend, if you are using C17, to use _Generic and macros so you can create a single #define OVERRIDE(field) type of macro and have the compiler throw if it finds an unexpected type. Of course, if you don't want to use a macro, you can also create separate inline functions to do the same (it just might be harder to manage). Below is an example of how we can tell the compiler we want to explicitly change the value in the implementation c file.

// employee.c file

void employee_set_age(struct Employee* employee, int newAge) {
	// Cast away the const and set it's value, the compiler should optimize this for you
	*(int*)&employee->age = newAge;
}
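
For completeness, here is a rough sketch of the single OVERRIDE macro idea mentioned above. The macro name, the handled types, and the helper function are illustrative assumptions, not part of the original rules; both the plain and const-qualified types are listed because compilers have differed on whether _Generic drops qualifiers from the controlling expression.

// employee.c file (sketch)

#define OVERRIDE(field, value) \
	_Generic((field), \
		int: (*(int*)&(field) = (value)), \
		const int: (*(int*)&(field) = (value)))

void employee_have_birthday(struct Employee* employee) {
	// The matching branch casts away the const; the build fails for
	// field types that OVERRIDE does not list.
	OVERRIDE(employee->age, employee->age + 1);
}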

Memory ownership

With perfect encapsulation you are most of the way towards having good memory ownership. If you purely encapsulate your structs the only way for a developer to create a new instance of the struct would be through functions you create yourself. From this point you can create the new and free functions to manage the memory of your struct. Below is an example building upon the previous code sample.

vec3.h

#ifndef VEC3_H
#define VEC3_H

struct Vec3;

struct Vec3* Vec3_new();
void Vec3_free(struct Vec3* vector);
void Vec3_print(struct Vec3* vector);

#endif

vec3.c

#include "vec3.h"
#include <stdio.h>

struct Vec3 { /* ... */ };

struct Vec3* Vec3_new()
{
	struct Vec3* v = malloc(sizeof(struct Vec3));
	v->x = 0.0F;
	v->y = 0.0F;
	v->z = 0.0F;
	return v;
}

void Vec3_free(struct Vec3* vector)
{
	free(vector);
}

void Vec3_print(struct Vec3* vector)
{
	printf("<%f, %f, %f>", vector->x, vector->y, vector->z);
}

Above you can see that we encapsulate the creation, usage, and freeing of our struct . You would think, with this, what else do we need to know about memory management? Well, there is one more thing, more of a rule that you must always follow: the thing that declares the memory is the thing that should free the memory . We see this in action above: the c file that creates the memory in turn has a function for freeing the memory.

Now let’s look at another example using a char* to represent a string function. Here we have a function that takes a string and clones it (wrong way):

char* strclone(const char* str)
{
	size_t len = strlen(str) + 1;
	char* clone = malloc(len);
	memcpy(clone, str, len);
	return clone;
}

Now what is wrong with the memory management in this code? Answer: we are using malloc to allocate memory and then returning the string. Let's take a look at the developer using this.

char* str = "Hello Brent!\0";
char* cpy = strclone(str);
printf(cpy);
free(cpy);	// Allowed?

How is the developer supposed to know that they are meant to free the char* ? For all they know, strclone uses some sort of pooling functionality to re-use a pool of memory; freeing that would risk a seg-fault. What is a better version of this?

void strclone(const char* str, char** outCpy)
{
	size_t len = strlen(str) + 1;
	*outCpy = malloc(len);
	memcpy(*outCpy, str, len);
}

Now with this version we make it explicit that the developer should manage their own object that they pass in. We use the hint name out as a prefix to the argument name to let them know memory will be allocated for this input variable. What does this look like to the developer?

char* str = "Hello Brent!\0";
char* cpy;
strclone(str, &cpy);
printf(cpy);
free(cpy);

Looking at this version, the developer knows they are in charge of freeing cpy , because they declared the variable in the first place rather than having it assigned from a function. If the developer follows our rule ( The thing that declares the memory is the thing that should free the memory ), they declared the variable/pointer, so they should be the ones freeing it. Now, I know you can argue for all sorts of alternative setups for the return value, but the fact of the matter is that passing in a pointer to a pointer makes ownership much clearer.

Avoid void*

One stigma people have against C is the use of void* . Some think it is necessary; some use it to solve problems quickly through the path of least resistance. I say there are very few cases where void* is acceptable, and most of the time your current problem isn’t one of them. Like NULL , void* is a lazy solution to a problem and causes all kinds of unnecessary runtime checking.

In most cases you should create a struct that explicitly defines what type is accepted or stored. The biggest advantage of this approach is that you put the compiler to work for you: there are all sorts of compile-time checks that will prevent you from doing something you shouldn’t do. Your IDE will also be much more helpful when navigating code, because neither the IDE nor the compiler has any idea where a void* comes from or what it points to.
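
As a hedged illustration of the point above (the node struct names here are invented for the example), compare a container that stores void* with one that states its element type:

// The "path of least resistance" version: the compiler cannot check what
// gets stored, so every access needs a cast and a leap of faith.
struct Node {
	void* value;
	struct Node* next;
};

// The explicit version: the compiler (and your IDE) now know exactly what
// this list holds, and storing the wrong type is a compile-time error.
struct Vec3Node {
	struct Vec3* value;
	struct Vec3Node* next;
};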

Don’t over-complicate strings

If I want to live in the 2020 era of programming, that means I will probably wind up using more than one library to solve a problem. My new problem is that people think it is cute to typedef char* to some other name and only accept that name in their code. In the era of UTF-8, that is completely unnecessary and forces me to do a lot of senseless casting. If you want to encapsulate the fact that you are using a string (so I don’t know it), then fine, do that, but typedef unsigned char* string is not it. Please stick to the good ol’ char* for strings.
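
To make the complaint concrete, here is a small sketch (the names are invented) of the kind of API the paragraph argues against, next to the plain char* version that needs no casting:

// Please don't do this:
typedef unsigned char* string_t;
size_t word_count_custom(string_t text);	// forces callers to cast

// Just accept the plain type:
size_t word_count(const char* text);		// works with any UTF-8/ASCII string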

Don’t over-complicate stdlib

TBD

Use utf8 strings

Talking about strings, I’d like to point out that UTF-8 is fully compatible with ASCII, which means we don’t need special functions for special characters or non-English characters. All of the usual suspects, such as fopen , work on UTF-8 strings! There are also some helpful things compilers give us, such as placing u8 in front of an inline string:

const char* utf8 = u8"Hello World!";

So in closing on the UTF-8 topic, please stop using wchar_t , char16_t , and all those other variants (except when you are forced to, due to 3rd party libraries). With that, I’ll leave you with this helper function to get you started:

size_t utf8len(const char* str) {
    size_t len = 0;
    /* Skip over each code point based on the high bits of its leading byte. */
    for (; *str != 0; ++len) {
        int v0 = (*str & 0x80) >> 7;
        int v1 = (*str & 0x40) >> 6;
        int v2 = (*str & 0x20) >> 5;
        int v3 = (*str & 0x10) >> 4;
        str += 1 + v0 * v1 + v0 * v1 * v2 + v0 * v1 * v2 * v3;
    }
    return len;
}

NOTE 1: A simplified version of this was proposed by Max Brauer on Stack Overflow to be:

size_t utf8len(const char* str) {
    size_t len = 0;
    for (; *str != 0; ++len) {
        int v01 = ((*str & 0x80) >> 7) & ((*str & 0x40) >> 6);
        int v2 = (*str & 0x20) >> 5;
        int v3 = (*str & 0x10) >> 4;
        str += 1 + ((v01 << v2) | (v01 & v3));
    }
    return len;
}

NOTE 2: This does not validate the UTF-8 string. I am not fond of making the length function also validate the string; for that we should create a separate function. Using this table I found on Wikipedia we can construct a validation function (the same table was used for the length function).

bool utf8valid(const char* const str) {
    if (str == NULL)
        return false;
    const char* c = str;
    bool valid = true;
    for (size_t i = 0; c[0] != 0 && valid;) {
        valid = (c[0] & 0x80) == 0
            || ((c[0] & 0xE0) == 0xC0 && (c[1] & 0xC0) == 0x80)
            || ((c[0] & 0xF0) == 0xE0 && (c[1] & 0xC0) == 0x80 && (c[2] & 0xC0) == 0x80)
            || ((c[0] & 0xF8) == 0xF0 && (c[1] & 0xC0) == 0x80 && (c[2] & 0xC0) == 0x80 && (c[3] & 0xC0) == 0x80);
        int v0 = (c[0] & 0x80) >> 7;
        int v1 = (c[0] & 0x40) >> 6;
        int v2 = (c[0] & 0x20) >> 5;
        int v3 = (c[0] & 0x10) >> 4;
        i += 1 + v0 * v1 + v0 * v1 * v2 + v0 * v1 * v2 * v3;
        c = str + i;
    }
    return valid;
}

NOTE 3 : Based on the same simplification previously discussed, this would become:

bool utf8valid(const char* const str) {
    if (str == NULL)
        return false;
    const char* c = str;
    bool valid = true;
    for (size_t i = 0; c[0] != 0 && valid;) {
        valid = (c[0] & 0x80) == 0
            || ((c[0] & 0xE0) == 0xC0 && (c[1] & 0xC0) == 0x80)
            || ((c[0] & 0xF0) == 0xE0 && (c[1] & 0xC0) == 0x80 && (c[2] & 0xC0) == 0x80)
            || ((c[0] & 0xF8) == 0xF0 && (c[1] & 0xC0) == 0x80 && (c[2] & 0xC0) == 0x80 && (c[3] & 0xC0) == 0x80);
        int v01 = ((c[0] & 0x80) >> 7) & ((c[0] & 0x40) >> 6);
        int v2 = (c[0] & 0x20) >> 5;
        int v3 = (c[0] & 0x10) >> 4;
        i += 1 + ((v01 << v2) | (v01 & v3));
        c = str + i;
    }
    return valid;
}

NOTE 4 : Something that developers trip on is the file encoding for their source file. Be sure you are using UTF-8 encoding for your source file when typing UTF-8 strings directly into your source code. If you use the wrong encoding, the compiler may compile the inline string with the incorrect encoding, even with the u8 prefix.

Don’t use char for memory array, use uint8_t

To keep code readable, you should only use char* or unsigned char* for strings (character arrays). If you want a block of bytes or a pointer to raw memory, use uint8_t* , where uint8_t comes from stdint.h . This makes the code much more readable: memory is represented as an array of unsigned 8-bit numbers (a byte array), and you can trust that when you see a char* it refers to a UTF-8 (or ASCII) character array (text).
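
A small sketch of the convention (the hexdump function is a hypothetical example): the signature alone now tells the reader whether a parameter is text or raw memory.

#include <stdint.h>
#include <stddef.h>

void hexdump(const uint8_t* bytes, size_t length);	// clearly raw bytes
size_t utf8len(const char* str);			// clearly UTF-8/ASCII text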

Use standard bool

This one is easy:

Don’t make defines for false , or False , or FALSE and its true counterparts; please just use the standard library’s stdbool.h .
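
For completeness, a minimal example (the function is an invented illustration) of what using the standard header looks like:

#include <stdbool.h>

bool is_even(int n)
{
	return (n % 2) == 0;	// true and false come from <stdbool.h>
}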

Don’t use static or global variables

Static functions are fine; they are great for breaking functions up to be readable. Static variables , however, are bad and almost never needed. Remember that we are living in a world where our CPUs are not getting faster, they are just coming in larger quantities. Always think about threadability and about controlling mutation. Even with a variable that is static to a C file and not global, you never know if someone is calling your functions from multiple threads.
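
As a hedged sketch of the alternative (the names are invented for illustration): instead of hiding mutable state in a static variable, pass the state in explicitly so the caller decides who owns it and how it is shared between threads.

// Hidden, shared state: two threads calling this will race on the counter.
static int call_count;
void do_work(void)
{
	++call_count;
}

// Explicit state: each caller (or thread) can own its own context.
struct WorkContext {
	int call_count;
};
void do_work_with_context(struct WorkContext* ctx)
{
	++ctx->call_count;
}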

Prefer inline over macro

Functions that are inline are much more readable, work better with your IDE’s code searching, and produce much clearer errors/warnings from the compiler. Some macros are great, so don’t ban them altogether, but do consider whether you can do what you need with an inline function first.
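
For example (a sketch with invented names), the classic double-evaluation problem disappears once the macro becomes an inline function:

#define SQUARE_MACRO(x) ((x) * (x))	// SQUARE_MACRO(i++) evaluates i++ twice

static inline int square(int x)
{
	return x * x;	// type-checked, debuggable, and visible to IDE search
}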

Test your functions

C is beautiful in that you don’t need unit test frameworks to fully test your code. Its lack of objects and hidden state makes it even better for testing. If you create a function, write a test for it and feed it your known test-case arguments. Did I mention that I love not needing a big, complicated mocking library to fully test my code? If your function takes a complicated struct for some reason, feel free to add a test-only helper for creating the struct you expect to be testing (but do NOT compromise perfect encapsulation for the sake of testing).

Write functions to do one thing

Okay, this isn’t a C-only thing, but make sure your functions are not creating large call stacks. Feel free to use static or static inline local functions to break up large functions for readability if you just can’t seem to make a function do a single thing (for performance, for example).

Don’t write systems, write modular pieces (think UNIX)

Don’t write big, complicated systems to cover many problems, even if things are loosely related in many ways. It is better to break up your code into useful functional pieces and cleverly put them together to get complex behavior. The beauty of Unix is that you can get many things done through many small programs pieced together. In the same way, you should develop useful functions that can be pieced together through data.

Warnings are errors

This one is a bit short. The idea is simple, warnings are errors.

  1. Make sure ALL warnings are enabled ( /Wall or /W4 with MSVC, -Wall -Wextra with GCC/Clang).
  2. Make sure that you turn on warnings-as-errors ( /WX with MSVC, -Werror with GCC/Clang).

Note: If you copied some source code from the internet that you need and it produces warnings, turn it into a lib and use the lib; do not compromise your code for other people’s unchecked code. You’d be surprised how many popular libraries fail the warnings-as-errors test (often they were developed assuming 32-bit code).

If there is a standard, use it

This was touched on before with stdbool.h , but if there is a standard function or type, use it. Use types like int32_t instead of plain int and hoping that int will be 32-bit. If there is a standard function for doing something, don’t re-invent the wheel. Wrapping standard functions such as malloc and free is a necessary evil, though, if you are creating tools to detect memory leaks and the like.
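
As a small illustration (the struct is hypothetical), fixed-width types from stdint.h make the layout explicit instead of hoping int is 32-bit everywhere:

#include <stdint.h>

struct FileHeader {
	uint32_t magic;		// exactly 32 bits on every platform
	uint16_t version;	// exactly 16 bits
	uint16_t flags;
};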

Use float epsilon for 0 checking

First of all, don’t check a floating point value against 0 or 0.0F . Instead check it against an epsilon, as in the following:

#include <math.h>
#include <float.h>

int main(void)
{
	float x = 1.0F;
	x -= 1.0F;
	if (fabsf(x) <= FLT_EPSILON)
	{
		// ... The value is basically 0, do some stuff
	}
	return 0;
}

Alternatively you can choose a suitably small number like 0.0001F to check against, if that is your cup of tea. The reason is floating point precision errors (which you probably know about or have heard of by now). I like FLT_EPSILON because it is part of float.h and a standard for everyone to use; just keep in mind that FLT_EPSILON is defined relative to 1.0, so for values far from 1 you should scale the tolerance to the magnitudes involved.
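
One way to scale the tolerance is sketched below (the nearly_equal name is an invented example, not part of the original text):

#include <math.h>
#include <float.h>
#include <stdbool.h>

// Treat a and b as equal when their difference is small relative to their
// magnitude, or absolutely tiny when both values are near zero.
static inline bool nearly_equal(float a, float b)
{
	float diff = fabsf(a - b);
	float largest = fmaxf(fabsf(a), fabsf(b));
	return diff <= FLT_EPSILON * largest || diff <= FLT_MIN;
}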

Zero Set Your Structs

One thing that would get me when developing in C is pointers inside of objects not being set to NULL . Now I know I’ve said I hate that the idea of NULL even exists, but when working with other people’s code it is impossible not to run into a situation where you need to set a pointer to NULL , pass a NULL , or check a pointer against NULL . So do yourself a favor and always use calloc (or memset(&thing, 0, sizeof(thing)) if it isn’t a pointer or newly allocated memory). Of course this doesn’t ban the use of malloc ; you should continue to use it for buffers. But as programmers we have a habit of not touching code that works, and it’s tempting to just add in that extra field; if that field is a pointer and you don’t initialize it to NULL where needed, you’re in for a world of hurt.
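
A short sketch of the habit being described, reusing the Employee struct from earlier (the constructor context and field layout are assumptions for illustration):

// Inside a hypothetical Employee_new(): calloc zeroes every field, so any
// pointer members start out as NULL.	/* needs <stdlib.h> */
struct Employee* e = calloc(1, sizeof *e);

// For stack objects (or memory you already own), memset does the same job.
struct Employee stackEmployee;			/* needs <string.h> */
memset(&stackEmployee, 0, sizeof stackEmployee);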

Big types first

When you create a structure, put the biggest types at the top of your struct and the smallest types at the bottom. Platforms like x86 will magically tolerate misaligned reads (at a cost), but other platforms (like some ARM configurations) can fault on misaligned access if you don’t get this right. The reason is padding in a struct. If you put a bool as the first field and an int32_t as the second field, like the struct below, you pack 1 byte and then the compiler has to insert 3 bytes of padding so the 4-byte int stays aligned, wasting space. The underlying problem is that the CPU is optimized to read along memory boundaries; when you malloc , you won’t get an odd memory address, for example.

struct Bad {
	bool first;
	int32_t second;
};

struct Good {
	int32_t first;
	bool second;
};
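
To make the padding cost visible, here is a hedged sketch (the sizes assume a typical ABI where int32_t is 4-byte aligned and bool is 1 byte); adding a second small field shows why the ordering matters:

struct Padded {		// 1 + 3 (padding) + 4 + 1 + 3 (tail padding) = 12 bytes
	bool first;
	int32_t second;
	bool third;
};

struct Reordered {	// 4 + 1 + 1 + 2 (tail padding) = 8 bytes
	int32_t second;
	bool first;
	bool third;
};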

More to come

There are inevitably more things I’ve forgotten about, but I’ve written this all in one sitting so this is good enough for now until I can update!

Join the on-call roster, it’ll change your life

Lobsters
serce.me
2025-12-09 11:15:34
Comments...
Original Article

Imagine you are a software engineer a few years into your career, working on a service that is an important part of a large production system. One day in a routine 1:1, your manager mentions that there is a gap in the on-call roster and asks if you would like to join the rotation. The concept is simple: if the error rate spikes while it is your turn in the rotation, you'll get an alert. When that happens, at any time of day, you'll need to open your laptop, investigate and restore the system. They even mention there is extra pay.

Chances are you can simply say "no" and keep your life exactly as it is. Or you can say "yes" and allow on-call to start changing you in ways you probably didn't anticipate.

Maybe you've already gone through this journey yourself. Or maybe you'll have to answer this exact question tomorrow. If so, I can't tell you what you should say, but I can tell you, in this article, how saying "yes" changed my life.

The good

To give some context, I joined an on-call rotation for a production service for the first time about nine years ago; none of my previous roles had involved being on-call. For most of that time I was on-call for a critical front-facing production service, the kind of service that shouldn't ever go down. And yet, every so often it would.

As an additional complication, I live in Australia, and the rest of the world is wide awake when we are asleep. So it's quite common for the peak traffic, which is a frequent trigger for incidents, to happen when it's the middle of the night here – waking the on-call engineer up. It's worth noting that this is less of an issue for giant companies that have engineering offices across all major timezones, but very few startups and smaller companies can afford this luxury.

You learn to deal with stress

When you receive an alert at 2am, you have to wake up, understand what's going on, fix the problem and provide updates, and all of this happening in the middle of the night can be incredibly stressful. This "on-call dance" is often hard to get used to because the causes are rarely the same, and yet after you go through this drill a few times, you learn to deal with it. This isn't simply because you got used to the process, you learn to deal with emergencies in general.

I've noticed this in myself – it's much easier to stay calm and act with a cool head in a real-life emergency situation when you've been through many incidents while on call before. Even though sitting in bed with a laptop at 2am and dealing with a severe injury in the middle of the forest look completely different, in both situations you're dealing with high amounts of stress.

You learn leadership and coordination skills

Not every incident occurs at 2am, however, and it's not uncommon that when the whole system goes down, it turns into an all-hands-on-deck situation. Someone needs to start the mitigation using feature flags, someone might need to investigate the initial trigger, someone needs to provide a summary. As you deal with more and more incidents, at some point you'll find yourself leading the response, and these are important leadership and coordination skills that could be quite hard to acquire in other situations.

You learn systems in incredible depth

The incident response doesn't end after an hour of firefighting, it's followed by tens of hours of digging into what happened , and then making sure that it doesn't happen again. During those long debugging sessions, as you dissect the system and try to reconstruct the exact sequence of events, you learn the systems you work on at a much more intimate level than you ever would by just shipping changes.

You develop an appreciation for how systems interact, for CPU and memory constraints, for the intricacies of the runtime, all of which surface later as you design the future versions of those systems. You know what can go wrong, because you've seen those failure modes multiple times. Writing those detailed internal post-incident reports is what tuned my engineering thinking the most. Not to mention, many of those firefighting experiences become great stories to share.

The bad & the ugly

It's not all great and rosy, though.

You acquire physical constraints

When you're on-call, your life is inherently constrained by the need to always have your laptop with you and to stay within the bounds of reliable reception. How limiting this could feel depends on your lifestyle – if you spend your weekends tinkering with robots in your garage, that's not much of a problem, but if you're hiking in the mountains every weekend, being on-call quickly becomes a real burden.

The potential health effects

You're woken up at 1am, you start firefighting, the adrenaline is high, you mitigate the issue and go back to bed at 2am, and yet you struggle to fall asleep. Your mind is still racing, and only 30 minutes to an hour later do you finally fall back asleep. The next day you feel exhausted, your mind is foggy and you can't think straight. This is a situation that's unfortunately familiar to many on-call engineers, and depending on how often it's repeated, it can have adverse effects on your health. There have been multiple research papers highlighting the negative impact of sleep disruption and on-call work specifically , and it can even affect the quality of relationships .

This is the part that is easy to normalise but you shouldn't. Don't let yourself become so used to nightly alerts that you treat them as normal background noise. Find a way to fix the situation or find a way out. Every benefit listed above isn't worth much if the long-term price is your health, and being on call is only worth it if you can minimise the impact on your health.

Conclusion

I'm a firm believer in the "you build it, you run it" model, and I'm on-call as I'm writing this article. Luckily for me, the forecast for the weekend isn't great, so I'm not exactly missing out on a perfect day in the mountains.

If you're deciding whether to join an on-call rotation, I'd suggest giving it a try. It's not a one-way door, you can always reverse your decision. No judgement if you decide to say "no" either. I hope my experience helps you make that call. And if you're someone with years of on-call behind you, please do share your experience as well, and I'm sure you've collected plenty of great firefighting stories to share.


Richard Stallman on ChatGPT

Hacker News
www.stallman.org
2025-12-09 11:12:07
Comments...
Original Article



ChatGPT is not "intelligence", so please don't call it "AI".

I define "intelligence" as being capable of knowing or understanding, at least within some domain. ChatGPT cannot know or understand anything, so it is not intelligence. It does not know what its output means. It has no idea that words can mean anything.

I call it a "bullshit generator" because it generates output "with indifference to the truth" .

The same applies to many other "generative systems", for the same reasons.

The widespread public error of attributing intelligence to those systems leads millions of people to a misplaced trust for them. Please join me in spreading the word that people should not trust systems that mindlessly play with words to be correct in what those words mean.

Another reason to reject ChatGPT in particular is that users cannot get a copy of it. It is unreleased software -- users cannot get even an executable to run, let alone the source code. The only way to use it is by talking to a server which keeps users at arm's length.

Doing your own computing via software running on someone else's server inherently trashes your computing freedom .




Show HN: AlgoDrill – Interactive drills to stop forgetting LeetCode patterns

Hacker News
algodrill.io
2025-12-09 11:09:06
Comments...

ZX Spectrum Next on the Internet: Xberry Pi ESP01 and Pi Zero Upgrades

Hacker News
retrogamecoders.com
2025-12-09 10:52:58
Comments...
Original Article
ZX Spectrum Next on the Internet

Getting my ZX Spectrum Next onto Wifi and the Internet, plus Pi Zero Accelerator

I’m enjoying my Xberry Pi ZX Spectrum Next , but I have to say the ‘simple’ upgrade of getting it onto Wifi via the cheap and cheerful ESP 8266 was not fun.

While I had every intention of setting up my Speccy keyboard on the board’s matrix connector, I need to wait for a socket of the correct pitch. It turns out the Xberry Pi needs 0.2mm spacing and I have none in my parts box. Instead I figured do the other main upgrades, namely add a Pi Zero “accelerator” and the Wifi upgrade.

ZX Next Pi Zero Accelerator Upgrade

Pi Zero on the Xberry Pi ZX Next

Fully expecting the Pi Zero to be the thing that caused me trouble, I was shocked that not only my soldering had gone perfectly but also the flashing of the SD card.

So many community members had advised to use a highly specific Pi Zero, I thought my spare board would be wrong. But it worked first time!

My pin header made the soldering trickier than usual, maybe it soaked a lot of heat? Just remember the header needs to go on the ‘wrong’ side and take your time.

The Pi Zero upgrade is hardly used currently but the ZX Next team keep hinting at future utility, and at least one game is in development that uses it as a kind of math coprocessor.

Loading a TZX tape on the ZX Spectrum Next

So after running a TZX file and some SID tunes, I moved on to the ESP figuring I would have everything done quick as a flash. Pardon the pun.

Adding Wifi to the Next

The Next can come with Wifi already installed apparently, and some Xberry bundles come with one as standard, but the chosen module is an ESP 01 / 8266 which is something I have on hand.

There’s precious little documentation for this kind of thing out there, but I did find YouTube videos that made everything look very straightforward. That should have been the first red flag!

None of them showed installing the correct ESP firmware, what baud rate the Next expects, or even attaching the ESP.

I picked out a board I was confident was working fine, attached it, successfully ran the ESP Updater .

After it confirmed the update, I tried to connect to wifi using the built-in Next menu item …. and failure.

The fact that the ESP Updater did its thing suggests the board was fine, so I tried, and failed, all the many ways to get the “driver” to install.

I quickly started to hate this “Installing Driver…” message. There is no indication if anything is happening when it gets stuck there for ages, but other times it quickly quits back to the menu, again with no feedback.

Spoiler alert : Even with everything now ‘working’, it often takes a lot more attempts than you would expect.

To save anyone else the bother I went through, and potentially Future Chris, the key was to NOT update the ESP 01 firmware. Just use whatever your board came with by default. My working firmware is not even listed on the ESP Updater repo …

This ESP 8266 firmware works fine if you need to flash anything

I rummaged around and found another ESP board that had never had its firmware flashed. Using a breadboard I confirmed it was working and responding to AT commands.

CircuitPython Code to Communicate with ESP01 via UART

import board
import adafruit_pio_uart

# Define the UART pins (example using GP16 for TX and GP17 for RX)
# The actual pins depend on your specific wiring/board configuration.
UART_TX = board.GP16
UART_RX = board.GP17
# Initialize the UART connection
# The baudrate for the ESP01 is typically 115200
uart = adafruit_pio_uart.UART(UART_TX, UART_RX, baudrate=115200)

def send(s):
    # AT commands must be sent as bytes and terminated with CR+LF
    uart.write((s + '\r\n').encode())
    response = uart.read(255)
    if response:
        print(response.decode())
send('AT+RST')
send('AT')
send('AT+GMR')

This code allows you to hook the ESP01 up to pins 16 and 17 for UART, and 3.3v and GND.

ESP 01 Pins for communicating on Pico

Wifi on the ZX Spectrum Next … Finally!

Again it didn’t work on the ZX Next right away, but I persisted and finally signs of life!

  1. Attach the Wifi module to the Xberry Pi.
  2. Use the .ESPBAUD dot command with -d to check the baud rate (it needs to be 115200).
  3. Use .UART to manually send AT commands and confirm the Next is communicating over TX/RX.
  4. Run the Wifi Wizard from the Next menu.
  5. All being well, run Wifi stuff such as NXTel!

Conclusion

Hope this helps someone else out there! I found it incredibly frustrating but now I am back to having fun with my ZX Next experience rather than regretting my choices.

Seems life might have been easier if a Pi Zero W was used instead of a Pi and an ESP8266, or even an ESP32, but perhaps there is a good reason they went with the little old ESP01.

Skate Story review – hellish premise aside, this is skateboarding paradise

Guardian
www.theguardian.com
2025-12-09 10:00:28
Sam Eng/Devolver Digital, PC, PS5, Switch 2An exquisitely fluid game of tricks, grinds and manuals is framed by a story that uncovers the poignancy of the infamously painful pastime Skateboarding video games live and die by their vibe. The original Tony Hawk’s Pro Skater titles were anarchic, arcade...
Original Article

Skateboarding video games live and die by their vibe. The original Tony Hawk’s Pro Skater titles were anarchic, arcade fun while the recent return of EA’s beloved Skate franchise offered competent yet jarringly corporate realism. Skate Story, which is mostly the work of solo developer Sam Eng, offers a more impressionistic interpretation while capturing something of the sport’s essential spirit. It transposes the boarding action to a demonic underworld where the aesthetic is less fire and brimstone than glittering, 2010s-era vaporwave. It is also the most emotionally real a skateboarding game has ever felt.

The premise is ingenious: you are a demon made out of “pain and glass”. Skate to the moon and swallow it, says the devil, and you shall be freed. So that is exactly what you do. You learn to ollie first, a “delicate, precise trick” according to the artfully written in-game text. Then come the pop shuvit, kickflip, heelflip and more.

Captures the spirit of skateboarding … Skate Story. Photograph: Devolver Digital

The controls are easy: one button to ollie. If you’re holding down a shoulder button at the same time, you perform a more involved trick. Beyond the ravishing visuals, what’s most striking is the exquisite fluidity, the delicious “gamefeel”, of the actual skateboarding: the way the knees of this glittering demon bend just the right amount after landing a trick; the way you can see their foot stretching out across the top end of the board in order to apply just the right force that will cause it to flip.

The vaporwave aesthetic is not Skate Story’s only bold design choice. You will fall many times on the ghoulish asphalt and when you do the action cuts to first-person, causing you to see the world tumbling for what feels like a tormenting eternity. Along the way, you meet a bizarre cast of characters: a mystical rabbit, a pigeon trying to type a screenplay, and a ghost hanging out in a launderette.

Real emotions … Skate Story. Photograph: Devolver Digital

The game’s action can be divided into two types: narrow, linear tunnels that you hurtle through at breakneck speed, and wide-open sandbox levels. The former are furious, momentum-filled thrill rides that demand utmost precision; the latter, set in nightmarish, nocturnal visions of New York, feature many offbeat objectives, such as chasing spooky laundry. In these levels, there is ample room to enjoy the deceptively deep skating mechanics.

Gradually, a melancholy surfaces in this crystalline universe. Of course, the skateboarder wants to be free of the underworld, but they also seem enraptured by the idea of devouring these moons. As you thread together tricks with manuals and grinds, scoring ever-larger combos, all as a brilliantly downbeat electro soundtrack chimes away, questions arise. Why is this skateboarder so hungry? Why do they seek pain? In some ways, we’re reminded of the physical risks of skateboarding in real life.

These questions – and the sadness buried within their answers – distinguish Skate Story from its traditionally zany video game counterparts. Rather, Eng’s gently emotive work is more in touch with the likes of acclaimed documentary Minding the Gap and Jonah Hill’s movie Mid90s .

The result is a skateboarding game of rare poetry. There is the poetry of the skating itself, the miraculous interplay of body and board rendered with aplomb. There is the actual poetry that accompanies the end of each level. Finally, there are the tender emotions that refract through, and seem amplified by every bailed kickflip in this surreal, shimmering take on hell.

Compiler Engineering in Practice - Part 1: What is a Compiler?

Lobsters
chisophugis.github.io
2025-12-09 09:55:11
Comments...
Original Article

“Compiler Engineering in Practice” is a blog series intended to pass on wisdom that seemingly every seasoned compiler developer knows, but is not systematically written down in any textbook or online resource. Some (but not much) prior experience with compilers is needed.

The first and most important question is “what is a compiler?”. In short, a compiler is:

  • a translator that translates between two different languages, where those languages represent a description of a computation, and
  • the behavior of the computation in the output language must “match” the behavior of the computation in the input language (more on this below).

For example, an input language can be C, and the output can be x86 assembly. By this definition, an assembler is also a compiler (albeit a simple one), in that it reads x86 textual assembly and outputs x86 binary machine code, which are two different languages. The python program that executes Python code contains a compiler – one that reads Python source code and outputs Python interpreter bytecode.

This brings me to my first important point about practical compiler engineering – it’s not some mystical art. Compilers, operating systems, and databases are usually considered some kind of special corner of computer science / software engineering for being complex, and indeed, there are some corners of compilers that are a black art. But taking a step back, a compiler is simply a program that reads a file and writes a file. From a development perspective, it’s not that different from cat or grep .

Why does this matter? Because it means that compilers are easy to debug if you build them right . There are no time-dependent interrupts like an operating system, async external events like a web browser, or large enough scale that hardware has to be considered unreliable like a database. It’s just a command line program (or can be reduced to one if engineered right), such that nearly all bugs are reproducible and debuggable in isolation from the comfort of your workstation . No connecting to a flaky dev board, no extensive mocking of various interfaces.

You might say – wait a minute – if I’m running on my company’s AI hardware, I may need to connect to a dev board. Yes, but if you do things right, you will rarely need to do that when debugging the compiler proper. Which brings me to…

Reliability

Compilers are like operating systems and databases in that the bar for reliability is extremely high. One cannot build a practical compiler haphazardly. Why? Because of miscompiles.

Miscompiles are when the compiler produces an output file in the output language that does not “match” the specification of its computation in the input language. To avoid a miscompile, the output program must behave identically to the input program, as far as can be observed by the outside world, such as network requests, values printed to the console, values written to files, etc.

For integer programs, bit-exact results are required, though there are some nuances regarding undefined behavior, as described in John Regehr’s “laws of physics of compilers” . For floating point programs, the expectation of bit-exact results is usually too strict. Transformations on large floating point computations (like AI programs) need some flexibility to produce slightly different outputs in order to allow efficient execution. There is no widely-agreed-upon formal definition of this, though there are reasonable ways to check for it in practice ( “atol/rtol” go a long way).

How bad is a miscompile?

Miscompiles can have massive consequences for customers. A miscompile of a database can cause data loss. A miscompile of an operating system can cause a security vulnerability. A miscompile of an AI program can cause bad medical advice. The stakes are extremely high, and debugging a miscompile when it happens “in the wild” can easily take 3+ months (and it can take months for a customer to even realize that their issue is caused by a miscompile).

If that weren’t enough, there’s a self-serving reason to avoid miscompiles – if you have too many of them, your development velocity on your compiler will grind to a halt. Miscompiles can easily take 100x or 1000x of the time to debug vs a bug that makes itself known during the actual execution of the compiler (rather than the execution of the program that was output by the compiler). That’s why most aspects of practical compiler development revolve around ensuring that if something goes wrong, that it halts the compiler before a faulty output program is produced .

A miscompile is a fundamental failure of the compiler’s contract with its user. Every miscompile should be accompanied by a deep look in the mirror and self-reflection about what went wrong to allow it to sneak through, and what preventative measures can (and should immediately) be taken to ensure that this particular failure mode never happens again.

Especially in the AI space, there are lots of compilers that play fast and loose with this, and as a result get burned. The best compiler engineers tend to be highly pedantic and somewhat paranoid about what can go wrong.

Why compilers are hard – the IR data structure

Compilers do have an essential complexity that makes them “hard”, and this again comes from the whole business of making sure that the input program and the output of the compiler have the same behavior. To understand this, we have to discuss how a compiler represents the meaning of the input program and how it preserves that meaning when producing the output program. This notion of “meaning” is sometimes called the program semantics .

The primary data structure in a compiler is usually some form of graph data structure that represents the compiler’s understanding of “what computation this program is supposed to do”. Hence, it represents the computation that the compiler needs to preserve all the way to the output program. This data structure is usually called an IR (intermediate representation). The primary way that compilers work is by taking an IR that represents the input program, and applying a series of small transformations all of which have been individually verified to not change the meaning of the program (i.e. not miscompile). In doing so, we decompose one large translation problem into many smaller ones, making it manageable.

I think it’s fair to say that compiler IRs are the single most complex monolithic data structures in all of software engineering, in the sense that interpreting what can and cannot be validly done with the data structure is complex. To be clear, compiler IRs are not usually very complex in the implementation sense like a “lock-free list” that uses subtle atomic operations to present a simple insert/delete/etc. interface.

Unlike a lock-free list, compiler IRs usually have a very complex interface, even if they have a very simple internal implementation. Even specifying declaratively or in natural language what the allowed transformations on the data structure are is usually extremely difficult (you’ll see things like “memory models” or “abstract machines” that people spend years or decades trying to define properly).

A very complex schema

Firstly, the nodes in the graph usually have a complex schema. For example, a simple “integer multiply operation” (a node in the graph) is only allowed to have certain integer types as operands (incoming edges). And there may easily be thousands of kinds of operations at varying abstraction levels in any practical compiler, each with their own unique requirements. For example, a simple C * (multiplication) operator will go through the following evolution in Clang:

  • It first becomes Clang’s BinaryOperator node, which takes two “expressions” as operands (which may be mutable uint32_t values, for example).
  • It will then be converted to an LLVM IR mul operation, which takes as operands an llvm::Value , which represents an immutable value of the i32 type, say.
  • It will then be converted to a GlobalISel G_MUL operation, whose operands represent not only an 32-bit integer, but also begin to capture notions like which “register bank” the value should eventually live in.
  • It will then be turned into a target-specific MIR node like IMUL32rri or IMUL32rr selecting among a variety of physical x86 instructions which can implement a multiplication. At this level, operands may represent physical, mutable hardware registers.

From a compiler developer’s perspective, all these “multiply operations” are deeply different from each other because of the different information captured at each abstraction level (again, compiler developers are usually very pedantic). Failing to adequately differentiate between abstraction levels is a common disease among poorly written compilers.

At every level, precise attention to detail is needed – for example, if the multiplication is expected to overflow mod 2^32 in the source program, and we accidentally convert it to overflow mod 2^64 (such as by using a 64-bit register), then we have introduced a miscompile. Each operation has its own unique set of constraints and properties like these which apply when transforming the program.

Complex interactions between operations

Additionally, how these operations in the IR graph relate to each other can be very complex, especially when mutable variables and control flow are involved. For example, you may realize that an operation always executes, but we may be able to move it around to hide it under an if condition to optimize the program. Consider the program:

x = y + z;
...
if (condition) {
    print(x); // The only time that `x` is referenced.
}

Is it safe to convert this to

...
if (condition) {
    print(y + z);
}

? Well, it depends on what’s hidden in that ... . For example, if the program is:

x = y + z;
...
y += 5;
...
if (condition) {
    print(x);
}

Then it’s not legal, since by the time we get to the if , the value of y will have changed and we’ll print the wrong value. One of the primary considerations when designing compiler IRs is how to make the transformations as simple and obviously correct as possible (more on that in another blog post).

Usually production compilers will deal with IR graphs from thousands to millions of nodes. Understandably then, the compounding effect of the IR complexity is front and center in all compiler design discussions. A single invalid transformation can result in a miscompile.

Compilers are just software

Practical compilers are often live for years or decades and span millions of lines of code, so the entire suite of software engineering wisdom applies to them – good API design, testing, reusability, etc. though usually with additional compiler-specific twists.

For example, while API design is very important for most programs’ code (as it is for compilers’), compilers also have an additional dimension of “IR design”. As described above, the IR can be very complex to understand and transform, and designing it right can greatly mitigate this. (more on this in a future blog post)

Similarly, since compilers are usually decomposed into the successive application of multiple “passes” (self-contained IR transformations), there are a variety of testing and debugging strategies specific to compilers. (more on this in a future blog post).

Conclusion and acknowledgements

I hope you have found this post helpful. I have a few more sketched out that should be coming soon. Please let me know on my LinkedIn if you have any feedback or topics you’d like to suggest. Big thanks to Bjarke Roune for his recent blog post that inspired me to finally get this series off the ground. Also to Dan Gohman for his blog post on canonicalization from years back. There’s too few such blog posts giving the big picture of practical compiler development. Please send me any other ones you know about on LinkedIn.

Stay tuned for future parts of this series:

  • Modern Compilers in the Age of AI
  • Organizing a Compiler
  • Testing, Code Review, and Robustness
  • The Compiler Lifecycle

The Joy of Playing Grandia, on Sega Saturn

Hacker News
www.segasaturnshiro.com
2025-12-09 09:48:55
Comments...
Original Article

The Renaissance Period

We are living through a Saturn renaissance. Buckets of titles previously locked away in Japan are seeing new audiences, thanks to the herculean efforts of small but dedicated teams of enthusiast translators, removing the veil of Japanese illiteracy from before our tired eyes. Interestingly, the majority of efforts are being directed at the games with the biggest scripts, and no other genre was as impacted by the language barrier as the text-heavy, story-driven RPG. Over a dozen quality titles are now playable in English. The Saturn is, once again, ascendant…

Ain’t life Grand?

Enter Grandia .

What hasn’t been said about Grandia ? In the run-up to its late 1997 release, the game enjoyed significant worldwide coverage in the gaming press, not least because some positioned it as the anti-FF7 title. Hot on the heels of the remaster of Lunar: Silver Star Story and hailing from respected software house Game Arts, featuring state of the art fully 3D environments, a score by notable composer Noriyuki Iwadare, sound effects produced by Skywalker Sound… Grandia was indeed shaping up to be one of the premier JRPG experiences of the 5th generation. There was serious talk of bringing the game out West — Working Designs was touted as the favoured house to do the honors, owing to their strong partnership with Game Arts, but the game’s massive script would have meant a late 1998 release by even the speediest conversion standards of the time. By then, the Western Saturn retail market had collapsed, and despite a shrinking but fervently dedicated base of Saturn fans holding on to hope of seeing the title cross the ocean, the game wound up locked away in Japan, forever.

Sue’s always looking out for Justin.

NEVER say Forever

Game Arts subsequently ported Grandia to the PlayStation, dropping it in Japan in the summer of 1999. Sony speedily localized the game for Western release later that same year… but we aren’t going to focus too much on the PlayStation version here because, at the time of writing, PlayStation discs don’t boot on your SEGA Saturn. It’s the Saturn game that we are concerned with. For us Saturn stalwarts, we had to wait to the mid-2020s for an intrepid team led by TrekkiesUnite113 to transplant the PlayStation’s English script into the Saturn code . By then, the game was decades old, not to mention re-released and ‘re-mastered’ on modern platforms. So, why translate Grandia for the Saturn, when multiple other English options exist?

Because Grandia is Best on Saturn.

How do you do

Set in an age of discovery at the dawn of the industrial revolution, Grandia initially tells the tale of young Justin — a 14-year-old fearless adolescent who can’t help but dream of adventure. When he isn’t playing at “hero” with his town friends, he’s dreaming of great expeditions to find the lost civilization of Angelou. He is joined by his friend Sue — an 8-year-old girl whose maturity belies her age, and who tries desperately to keep young Justin in line. Justin’s mom Lily runs the local Seagull Restaurant and does her best to raise Justin into a respectable young man… though in her youth, she was a scrappy pirate herself. In her heart, she knows her audacious spark has passed on to her son, and that Justin will one day take up the adventurer’s mantle and take off on a grand adventure of his own , so she does her best to prepare him for when the time comes.

She gives Justin a Spirit Stone — a remnant of the Angelou civilization and a memento of his long-lost father — and in doing so, helps kick off a fantastic voyage that sees young Justin explore, learn, overcome all manner of obstacles, and ultimately, grow and become the hero that he always imagined himself to be.

The party’s travels take them to the most interesting locations.

During his quest, Justin encounters fascinating characters, both friend and foe. From quiet folk in sleepy villages to rambunctious youngsters eager for their own slice of adventure; from military platoons led by the most beautiful — but hopelessly shallow — lady sergeants to cunning merchants, towering warriors, alluring mermaids and ferocious dragons… Justin encounters them all, and for good or ill, manages to change the course of their lives in ways both subtle and grand.

Justin, Sue, and Feena are the first three playable characters in Grandia . Young Sue tries to keep Justin in line, while Feena searches for the true meaning of being an adventurer – with Justin slowly moving from admiring her to showing her the way.

The game is clever in pulling the player in for a ride that for a very long while feels very lighthearted and innocent. Even as Justin’s adventure begins in earnest and the player is exposed to antagonists, mysteries, undercurrents and intrigues, Justin can’t help but distill it back to the very pure essence of boyhood adventure. Mysterious tower causing problems for a nearby village for years? No problem, Justin will fix it! A dragon from a nearby volcano terrorizing the locals? Justin’s got this. A ghost ship sailing in to harass a passenger steamer? Justin is the answer, in the same way that, as youngsters, we all knew – we knew! – that WE were the heroes , and that WE would save the day, armed only with our courage and our grand imaginations. It was our duty, after all. We had it in us to go forth boldly, and change the world (and naturally, all before being called home for dinner).

This point is driven home by Justin’s insatiable desire to uncover the mystery of his Spirit Stone, and the ancient Angelou civilization. After an unfortunate but entirely predictable mishap in the local museum, followed by a mysterious revelation in the nearby Sult Ruins, Justin’s curiosity is ignited, and his drive for real adventure becomes indomitable. Meanwhile, forces are at work that care not for Justin’s explorations, and inevitably, the lad finds himself pitted against the Garlyle Forces and some of its top commanders. Their aims are complex and their operations span the world, and this scope creates a wonderful juxtaposition with Justin’s innocent demeanor and singular focus.

The amount of architecture being displayed here is stunning, though Grandia makes the Saturn work for it.

On Screen!

The Fifth Generation of consoles marked the rise of 3D graphics, but some genres made the leap easier than others. This shift was a struggle for RPGs, with many excellent titles continuing to employ 2D visuals, albeit in richer color and more sophisticated detail than seen in previous generations. Early attempts at the 3D RPG ( Virtual Hydlide ) highlighted how difficult it was to run this style of game on the hardware of the time without wrecking the framerate or turning textures into a checkerboard mess. Dungeon crawlers ( Shining the Holy Ark ) were among the first titles to get the 3D right, though the player’s scope of movement was very restricted. Finally, some fantasized that “3D” meant pre-rendered backgrounds and copious FMV clips, with the only real 3D being battle scenes. Ahem!

Grandia took the traditional overhead RPG view and transformed the landscapes into fully realized 3D polygonal playfields that can be rotated and zoomed at will. Character and enemy sprites are then overlain on the 3D world to make the scenes come to life. The addition of the third dimension affords the use of depth in the environments: hills, cliffs, and valleys; minecar rails that ran higher or lower relative to other tracks, and so on. In this way, the player initially feels right at home with a view that looks comfortably familiar, but must quickly learn to constantly rotate the viewpoint to catch enemies in hiding, spy treasures only visible from certain angles, judge heights, and evaluate other geometric details to plot their best course forward.

Aside from technical achievements, the art direction is fantastic.

Grandia wastes no time in getting gamers used to this new visual paradigm. One of the game’s first quests sees local frenemy Gantz challenge Justin and Sue to locate the three Legendary Treasures: the fabled helmet (Iron Pot), the storied shield (Pot Lid), and of course, the legendary (Wooden) Sword. The player must traverse all of Parm, climbing down river walkways, checking in enclosed spaces, and chasing down Gantz’s little brother to prove they are up to Gantz’ task — and in the process, get used to the game’s then-new control scheme.

The 3D is very well put together, both technical and artistically. The level of detail is truly phenomenal, from the tiniest objects and details, especially in the ‘in-town’ game sections. Justin is able to interact with some of the innocuous scenery — for example he can knock brooms over, disturb piles of plates, or bump into bells and chimes — just as any real, overly excited 14-year-old might clumsily do as they ran along. Animations, from little weathervanes rotating to laundry fluttering on a clothesline, to puffs of smoke coming up from fires or chimneys, all accentuate the feeling that these are real, living, bustling places. The level of detail, and all of it in 3D, is really special.

The coders at Game Arts made excellent use of the Saturn’s unique hardware when realizing Grandia ’s locales. Where appropriate, textured infinite planes are used to draw floors, and they not only look good but also dramatically cut down on the usage of polygons in drawing the scene, leaving that much more in the processing budget to spend on other visual details. In later sections, those infinite planes take on a distortion effect to create some very cool-looking water flows — look for them initially in Parm’s pier, and later in places like the snowy Laine Village or the mysterious Castle of Dreams. The water shimmers as the player rotates their view to create a truly stunning effect.

Slimes are never that tough to dispatch in any RPG.

The game’s characters and enemies are all represented by sprites that animate quite well and take viewpoints into account as the player rotates the camera. In yet more attention to detail, the sprites darken and then lighten again as the player moves in and out of shadowed areas — an impressive little detail that accentuates the visuals even further.

The trio of General Baal, Colonel Mullen, and Leen is introduced in the game’s opening scene, and all three are more than they appear.

The care that Game Arts took in crafting the visuals is commendable and Grandia comes off as one of the very best-looking 3D RPGs for the system, but Game Arts was perhaps a mite too ambitious. There are sections of the game where the framerate really chugs. Now, it must be acknowledged that low framerates were a hallmark of many 3D games in the 32-bit era, so some of this is to be expected, but the more detail Grandia is trying to show you, the more you will feel the Saturn huffing and puffing to get the job done. The game’s 3D framerate is not high at the best of times but it is passable, so it’s somewhat of a relief that the areas where it truly takes a dive aren’t too common.

Pump up the Jam!

Game Arts’ attention to detail extends to the sound department. For Grandia , Game Arts commissioned Skywalker Sound to handle the game’s sound effects. The result is positional sound — effects like running water, crackling fire, etc. will fade in and out as Justin and co. move closer in or further away from the source. Often, if the effect is important, it will also somewhat dampen the volume of the BGM as it plays out. Additionally, the effects will pan left or right depending on the source, and especially as the player rotates the camera. These effects may be subtle, but they are very well implemented and add to the game’s overall polish.

The game is very colorful.

The game’s soundtrack was composed by Noriyuki Iwadare and is both varied and excellent. Iwadare’s use of instruments appropriate to the on-screen action is uncanny — for example, running around Parm we are treated to an industrial sounding theme, perfect for the town’s motif. The varied use of strings, drums and winds is frankly excellent and lends to the atmosphere, imitating the clang of metal and steel which so permeates the city. Equally impressive is that the music somehow manages to be exciting or somber or poignant without ever sounding overly (excuse the wordplay) grandiose. This keeps the soundtrack in line with the game’s more lighthearted narrative. Of course, where appropriate, the soundtrack does take on that epic quality. The desperate tones that play when the Garlyle forces appear contrast so well with the carefree, upbeat “Off Runs Sue” tune. Mullen’s theme is at once wistful and ambitious, and even the theme from the Sult Ruins dungeon is perfectly mood-setting. Multiple Grandia soundtracks have been released since the game’s debut and the soundtrack is universally praised.

Leen is one of Col. Mullen’s acolytes.

How it Plays Out

Grandia ’s gameplay, like so many RPGs before it, is split into two major gameplay slices: exposition-laden town sections and combat-focused dungeons.

Players will spend a fair bit of time in the ‘in-town’ sections of the game. Here, you will wander around, take in the scenery, interact with the NPCs of the area, and almost always, find a quest that must be completed. A quick word about the NPCs — there are quite a number of them in each town, and everyone has something interesting to say… and almost always, each NPC has at least two separate conversation sequences to offer, making for a truly large amount of story to soak in. And it’s all entirely optional! It’s completely possible to make one’s way through Grandia with only minimal NPC interaction, but the option to enhance the adventure with these extensive NPC interactions is always there, as each character will present a unique view or focused response.

An unlikely pairing.

Predictably, the towns are filled with shops, though Grandia keeps things rather simple — there is but one general store which carries weapons, armor, accessories, and even magic all under the same roof. Buy, sell or trade up to the latest gear which gradually increases in the stat boosts it confers to your characters. Additionally, each town typically has one or more important locales, such as mayors’ offices or the chambers of village chiefs.

There is typically an inn or other house where the party can take rest, and at certain points in the game, resting triggers a shared meal scene that sees Justin break bread with his party mates. These meal scenes offer up critical dialogue, which the gamer can extend or keep short at their whim. When the critical conversation has been had, a bedtime icon will appear over Justin’s character sprite, and if the player is quite finished listening to the party chatter, they can select it to end the meal and get some rest. These mealtime conversations serve not only to flesh out what the party must tackle next, but also to offer a glimpse into the inner thoughts of the individual characters as they share their opinions, hopes and fears. Like so much in the game, Grandia implements this character exposition in a way that allows the player to decide how much of it to take in.

Great use of color.

The visuals in the town sections really stand out. The Saturn manages to shift not only impressive amounts of polygons for the various structures, but also vivid and complex textures. This technical prowess is coupled with lush and imaginative art direction, resulting in each locale feeling complete and distinct. The dense, humid and green surrounds of Luc Village, nestled deep within the Misty Forest and inhabited by humanoid creatures contrasts sharply with the seaside port town of Dight with its cerulean waves gently rolling in onto its sandy shores. Milda’s hometown village of Laine is covered in snow, and the ancient Zil Padon is an architectural wonder with a central fountain in the middle of the Savanna desert. Game Arts very clearly discarded their standard world building cookie cutters, and their efforts shine through.

The world map. The feather icon indicates where you will travel next.

Once a locale has been explored, it will appear as a selectable destination on a gorgeous, hand-drawn high-resolution world map. Exiting an area often brings our party to this world map, and the next destination can be selected.

If the towns serve to heal the party, upgrade equipment, and advance the story, then the dungeons of the game offer treasure hunting, exploration, and of course, combat! Dungeons in Grandia range from literal underground labyrinths to above-ground forest mazes, to even large open plains that Justin et al. must traverse. Some of the more noteworthy settings include scaling a giant wall that keeps the world divided into two separate societies, negotiating the bowels of a ghost ship which appears out of nowhere to molest a transcontinental steamer, and even conquering the inside of an unstable volcano that’s inhabited by an ancient dragon.

Here, the player really must use their L and R buttons to shift the 3D landscape around, to find all the nooks of treasure or paths forward. Some areas feature set pieces that Justin and party can activate — for example, knocking over a loose pillar to bridge a gap. These are usually indicated by an exclamation point icon when the party nears the set piece.

Some of the spells are quite spectacular.

All the while, treasure both great and small litters the landscape… but so do enemies! Enemies are visible in the dungeons and so can be avoided to an extent, but if Justin and party come in contact with an enemy, combat ensues.

Grandia Grinder Alert!
Grind for Experience Points Using Environmental Damage!

Are YOU a Grandia grinder?? Some sections of the game will deal damage to Justin and party outside of combat. First noticed in the Dom Ruins, rock faces painted into some of the dungeon walls will cause mild HP damage by springing out and poking the party when the party doesn’t want to be poked! The player can then use heal magic and spam this process to quickly increase Water magic levels. Although definitely a grind, it’s much faster than earning those experience points via combat. A few other areas in the game present similar opportunities — such as the basement in the Castle of Dreams.

A New Kind of Kombat

Grandia introduced an all-new combat system to the RPG genre, though it could be said to be a variant of other similar RPG battle systems. Essentially, all battle participants have icons that continuously move along a universal IP Gauge until they reach the Command point. Here, the player selects from a menu of commands that includes attacking, using an item or a spell, guarding, or even retreating. The character then waits to reach the very end of the gauge to execute the selected action, and the more experienced the character, the faster that point is reached. A ton of strategy is introduced here, because during this waiting period between selecting an action and executing it, characters are vulnerable to both Cancels and Counterattacks from their opponents. Unlike many contemporary RPGs, where the instinct is to simply unleash physical and magical attacks in a turn-based order, the player can take advantage of these waiting periods to cancel out incoming enemy attacks and push them back on their IP gauge. The system takes some getting used to, but it can be used to devastating effect, especially in the more drawn-out boss battles. By carefully timing attacks, it is entirely possible to get in a half-dozen actions with each character and prevent a boss from retaliating during the entire sequence. This makes combat a lot more involved and exciting.

Cancel culture? Counterculture? Grandia’s got it all.

There are also advantages to catching an enemy unawares — player characters start much further ahead on their IP Gauge, with the reverse being true if Justin’s party is ambushed.

Players have a range of actions to choose from when their IP Gauge is full, from the standard fare of using items, defending, and running away, to even inspecting an enemy (is that slug-monster male or female, for example*).

Nana, Saki, and Mio are Mullen’s three she-sergeants. Serving as comedic relief, they are nevertheless quite capable opponents in battle.

By Your Powers Combined… I Am Captain Planet!

Earth. Fire. Wind. Water. These are the elemental forces that move the world, and most characters can master them! Learning magic in Grandia first requires that the party find a Mana Egg. These rare items can then be exchanged in a shop to teach a single character of your choice the basics of the chosen magic element.

Within each of the four elements, magic spells are further split into levels, from one to three, indicating their potency. Level 1 spells are the most basic and are what a character starts off with upon buying magic with a Mana Egg. Using magic in combat earns skill points in that particular element, and those skill points apply to all spells of that element, regardless of spell level — so, use a Level 1 Fire spell, and all levels of your Fire magic gain skill. Spell skill progression is represented by five red stars that fill up like a gauge, turning yellow as the element gains experience. Greater experience shortens casting time (which, remember, is a vulnerable time, as your spell can be cancelled by an opponent) and, at higher levels, allows your character to learn combined-element magic spells. All magic spells consume MP, making them a limited resource, though a character’s overall MP capacity will grow with experience.

The snowy village of Laine. The water effects are chef’s kiss .

Outside of magic, each character can also execute special attacks that are unique to them. These attacks are usually more devastating than standard attacks and sometimes require that the character is using a particular weapon class. These, too, gain skill points represented by five red stars that slowly build up to yellow, though special attacks consume SP (skill points). SP works much the same way as MP.

Grandia Grinder Alert!
Rare Enemies Give High XP

Typically, the game’s monsters do a good job of seeking you out, but there are occasional difficult-to-catch enemies to be found as well. Notice, for instance, the Chameleon enemies in the Virgin Forest. These green creatures are shy and are hard to catch and engage. But persist, and finish them off for a huge load of experience points — well worth a grinding sesh or three.

Experience Required

Grandia has a complex (for the time) experience points system, which is cleverly segmented into several categories.

Level up!

To start, each playable character has a set of basic stats that slowly increase as they gain experience. Hit Points (HP) are your standard measure of health, and these increase at level-ups. SP are your skill points, which increase the speed and potency of your special attacks and unlock new special attacks as you accumulate experience. Finally, the same is true of the more traditional magic points (MP), the difference being that special attacks are unique to each character, whereas magic is shared across party members and can be bought in exchange for Mana Eggs.

As they adventure, Justin and company will occasionally find items that slightly boost a particular stat on a permanent basis. These items are rare indeed, but as with life, incremental gains tend to compound until the effects are undeniable.

The Seed of Speed grants a permanent stat boost.

Most traditionally, defeating enemies grants experience points and accumulating the required amount grants characters a level-up, which slightly increases basic stats. Experience gained and gold / treasure collected is displayed on an after-battle screen. It is this type of XP that most contemporary RPGs concerned themselves with. Grandia ups the complexity a bit by introducing leveling for magic and skills, and further mixes things up by employing different weapon classes.

Justin and company are each capable of wielding a few different types of weapons, of which there are seven in total, ranging from swords to maces to staffs to bows. Each weapon class has its advantages and disadvantages, be it speed of use (from Command input to Execution on the IP gauge), to range, to overall damage dealt. As party members use their weapons, they gain experience in those weapon types, separately from their character experience.

The texture work is awesome throughout.

In total, Grandia features basic character experience points, which boost common stats; magic experience, which results in spells being cast faster and higher-level spells being learned for the various element types; skill experience, for faster execution of special attacks; and weapon experience points, which increase how well a character handles each weapon type. Cleverly, these different experience categories are implemented in such a way that gamers can completely ignore this aspect of the game should they so fancy. Because the system is automated, players can pay it little heed and still progress and have a great time with the game. Alternatively, they can dive right into the finer points of the system to make those minor tweaks that get their characters to exactly the state they prefer.

The mysterious Liete awaits at Alent. The enigmatic Puffy accompanies Sue wherever she goes. Lastly, Darlin is one of the many non-human denizens of Grandia.

Go with the Flow

Grandia allows up to four playable characters to form Justin’s party at any one time. As the story progresses, some of the main characters will permanently step away from the adventure, for reasons practical and dramatic alike. One such parting in particular tugs at the heartstrings — it is nothing quite as dramatic as the year’s earlier death of Aeris (Aerith) from that big RPG on Sony’s lesser 32-bit machine, but it somehow feels more relatable, and more impactful. Players ought not be surprised by the need for tissues to manage an unexpected tear or two. And here, too, Grandia innovates: a portion of a departing playable character’s magic and weapon experience points are stored in the stashing place, to be retrieved and applied to whatever character you see fit. This strengthens their legacy in your party, and provides a practical reason not to neglect building up a character just because they may eventually leave the party. A nice touch.

At the foot of the End of the World.

Is It Perfect?

Grand as it sounds, the game isn’t without a few small flaws. Story-wise, players will be left wanting to know more about Justin’s father and how he came to be the keeper of his Spirit Stone. He is mentioned often in the early stages of the game, but as Justin’s adventure takes off, that arc never completes. Likewise for General Baal — we eventually learn his motivations, but not so much why he has become who he is today. A really well put together villain is one with whom we can empathise; someone whose circumstance we can understand. Both with Justin’s unnamed father and with Baal, there is a feeling that we are reading a book and that the answers lie just ahead, but despite some teasing, Grandia never lets us turn the page.

Technically, the game’s 3D is solid and varied, with plenty of minor details and meticulous textures, especially in the town sections. Blending VDP2-drawn planes with solid geometry and animated sprites means the world of Grandia is beautifully rendered, but that comes at the cost of an oft-stuttering framerate. The more of Grandia ’s world we are allowed to see at once, the more the framerate suffers. Now, these were the formative years of 3D gaming, but at times, that framerate simply chugs, and it’s noticeable to the point of distraction. Thankfully, for most of the game, the framerate sits comfortably in the ‘acceptable’ space, but you won’t get through the game without feeling the Saturn sweat as it works to display all that Grandia ’s artists wanted you to see.

Special Moves. These gain experience as well.

Speaking of 3D, the game often requires shifting camera angles when exploring. In long dungeons or any other large space, this can quickly become disorienting, and the player will lose their sense of direction. The game compensates somewhat with a compass, though its implementation is somewhat clumsy: rather than pointing north, it points to an exit or other objective. There are also lookout points called Dungeon Scopes, where the player is given a bird’s-eye view of their current location from a default ‘north is up’ viewpoint. This helps with orientation, but those lookout points are few and far between, and using them tends to break up the game’s flow. Players may well find themselves keeping their camera shifting to a minimum as a result.

Lastly, a technical note: Grandia sure gives the Saturn’s laser a workout, and there are some clever pre-loading techniques implemented to keep the game flowing as smoothly as possible. The cost here is that Grandia is very sensitive to disc quality. Those who have burnt their English-patched game onto CD-R and are playing on real Saturn hardware may well find that the game freezes, especially in battle when various spells are called. This is VERY annoying, especially as dungeon save points are sparse, and it is not uncommon to be in the heat of a battle only to have everyone freeze, with the reset button being the only escape. This is remedied by using an ODE (optical drive emulator) solution that omits discs entirely, but the game’s sensitivity to the quality of your CD-R burn needs to be called out.

Hell yeah! Feena’s strongest spell.

Final Word

Grandia is great. The visuals are gorgeous, the music is appropriately evocative, the combat is frenetically strategic, and the story is well paced. Tough battles and surprise plot twists await intrepid gamers, and sub-plots occasionally weave their way into the adventure, too — especially in sections where we briefly leave Justin. On occasion, players will follow Colonel Mullen with Feena, explore the mysterious past of Mullen’s attaché Leen, or even soak in the comedic antics of the three beautiful Garlyle generals Mio, Nana, and Saki.

Ultimately, Grandia is a delight to play. A total joy… but one that demands an intense time commitment. A player Justin’s age surely has the time, but what about those of us who are well into adulting? Some sections of the game, especially the longer dungeons, offer few opportunities to save one’s game. In that sense, the game is a total hardcore, traditional JRPG. It is not easily digested in small play sessions, so playing Grandia means committing a huge slice of one’s discretionary time budget.

And yet, perhaps paradoxically, playing Grandia has a way of making one feel young again. Grandia is grand in the same way we ourselves felt grand as youngsters — that, armed with a stick we’ve just picked up and nothing more than our imagination, our wits, and our indomitable spirit, we could go forth boldly and change the world. That’s the beauty of a main character like Justin — he is not yet jaded; he has not yet borne the burden of grown-up problems on his shoulders. In many ways, we were all Justin (or Sue!) at one point, and the game shines a light on that part of us that is now long behind (most of) us. Perhaps the most memorable aspect of Grandia is that it allows us, for a moment all too brief, to once again be that young boy or girl full of optimism and energy, and in today’s complex and stressful world, that feels simply wonderful.

Promotional art that showcases one of the game’s most powerful moments: Justin, Sue, and Feena have climbed the wall at the end of the world, and see, for the first time, the lands on the other side.

Three Optional Dungeons

Grandia is generally a well-balanced affair, with experience accumulating at the right rate for players to progress through the game. That said, the world of Grandia plays host to three completely optional dungeons meant solely for increasing character abilities and experience — and the game goes so far as to explicitly point out that these areas are not part of its story.

The first such dungeon can be found just west of the first section of the Zil Desert. It’s a large, very dark brown multi-leveled maze with the only save point at the entrance. The enemies are tougher than one would expect at this point in the game, but nothing is impossible for Justin et al. The key here is to find the four Soldier’s Souls, which grant access to the treasures of the dungeon at the very end, past the boss. The boss is a remix of a previous boss from Feena’s failed wedding to Pakon and packs quite a punch. The main prize here is the excellent Godspeed Knife, which grants a huge ACT boost, massively speeding up the user’s IP gauge.

The Soldier’s Graveyard entrance.

The second optional dungeon is also found to the west but is accessible from the second part of the Zil Desert. This dungeon is very small but has perhaps the most charm. Justin and company are greeted by a mysterious Lady at the castle entrance, begging for help but also warning of a curse on the castle. Once inside, there are several rooms to visit and loot to collect. It’s all quite simple, seemingly designed to lure the player into lowering their guard just in time to battle the formidable Lord’s Ghost boss. This guy’s TOUGH, with strong multi-character attacks and cancelling moves. Take him down to claim the awesome Lightning Sword, which gives a 50 ATK boost and, as an elemental sword, has the Zap! spell built in.

Don’t thank us yet…

The final optional dungeon is the mother of all dungeons in Grandia . Found tucked away in the Savanna Wilderness and accessible via a secret passage, the Tower of Temptation consists of an outside area and 12 (!) floors of punishing combat. Of course, the only save point is at the very start of the outside area, though Justin can activate a couple of shortcuts through the tower as he makes progress, so that backtracking to heal and save is a bit easier. Interestingly, the starting area is surrounded by six Zero Weapons (one of each weapon kind, with a 0 ATK value), ideal for training weapon skills on weaker enemies, as these weapons deal next to no damage.

Grandia Grinder Mini-Alert: many enemies in the Tower drop stat-increasing items, making this an ideal place to pull out all the stops and go for that growth.

Prepare to spend hours on this dungeon.

Each floor of the Tower features maze sections, hidden doors, powerful enemies, and of course, switches to hit. Simply making one’s way through the tower will increase the party’s levels, as there is so much battling to do. It is not uncommon to spend hours in the Tower, so it’s a welcome fact that it is entirely optional. The final three floors are all boss — yes, there are three bosses to fight in a row. No saving, no healing. The last of the three bosses is tough as nails, but the reward is well worth it — NINE amazing items to pick up, including two items from the Grinder’s Gear™ premium collection: the Astral Miracle and the Ethereal Miracle, both accessories that double weapon or magic experience gained. VERY useful, but they had better be, considering the pain just endured to complete the Tower of Temptation!

The Universe is Grand…ia

Grandia went on to sell bucket-loads in Japan, especially during release week. It received a Digital Museum DLC-style disc, got a port to the mass-market PlayStation (including a PS Greatest Hits re-release), and finally received an English PlayStation localization in 1999. The series continued in 2000 with the excellent Grandia 2 on Dreamcast, which itself was later poorly ported to Sony’s killer of dreams, the PlayStation 2. That system would also see the less well-received Grandia 3 , which would spell the end of the main series’ run. The series also spawned several spin-off games such as Grandia Xtreme and Grandia Online . Additionally, the first Grandia was recently remastered for modern consoles with the release of the Grandia HD Collection .

*Note: you cannot inspect monsters’ genders in battle. That was just a joke. Also there is no Grinder’s Gear in Grandia.

I’m Not Crying, You’re Crying!

A beautiful scene.

A bit of a personal story… The above screenshot is my favorite scene in all of Grandia. See, the game does a brilliant job of bringing us back to the days of youthful adventures where nothing at all was impossible, and despite whatever danger beset us, we knew deep down that in the end, we would be all right. But in the most subtle of ways, Grandia also covers personal growth and the passage of time.

At some point, deep into the adventure, 8-year-old Sue gets tired. At first, she temporarily leaves the party whilst recuperating at a local sick house, with everyone hoping (and the player confidently knowing) that she will get better. But… she doesn’t. She puts on a brave face and re-joins the party, going on one final quest. As the gamer, I kept looking for the herb or special item that I could find to cure her, but no such moment ever came. There never was any single wound or ailment that Sue suffered; it’s just that one day, she simply… got tired, and ultimately, had to leave the party. She was a trooper through the entire adventure, completely indispensable, but there was a sunset to her time on the grand adventure, and she ended up leaving far too soon for my liking.

In real life, this sometimes happens, too. People in our orbit — strong, vibrant people, whom we believe will be with us forever — sometimes, unexpectedly, undeservedly… get tired, and have to quit the great adventure. Sometimes they are even younger than us, or in better health than us, or benefitting from any number of other factors that make their leaving seem senseless and cruelly unfair. It’s a reminder of the finite nature of life, and that sometimes we are living oh so naively and innocently through what we will later call the best times of our lives.

Sometimes, we get a chance to say our goodbyes before they depart us, and this is something Justin and Feena were able to share with Sue. With tears in her eyes, even as she bade farewell, she wished for Justin to follow his dreams and complete his long quest to find Angelou. It’s this that ties all of these sentiments together, for me. We all get older. We all leave our childhood behind us and begin to lead our adult lives in earnest. Our carefree days of questing and playing our days away, confident that in the end, everything will be all right, are replaced by planning, worrying, pressure, stress, failure, and other harsh realities of life. Here, Sue reminds us of the importance of not forgetting our dreams. We may not have the time or the energy that we did then, but whatever the obstacles, we must always go boldly in the direction of our dreams, hand-in-hand with those who love us, for we, too, will one day exit the adventure. In our final moments, what sweeter satisfaction could there be than to warmly smile at those who walked with us, and to look back on our journey with pride.

Factor 0.101 now available

Lobsters
re.factorcode.org
2025-12-09 09:26:20
Comments...
Original Article

“Keep thy airspeed up, lest the earth come from below and smite thee.” - William Kershner

I’m very pleased to announce the release of Factor 0.101!

OS/CPU     Windows    Mac OS     Linux
x86        0.101                 0.101
x86-64     0.101      0.101      0.101

Source code : 0.101

This release is brought to you with almost 700 commits by the following individuals:

Aleksander Sabak, Andy Kluger, Cat Stevens, Dmitry Matveyev, Doug Coleman, Giftpflanze, John Benediktsson, Jon Harper, Jonas Bernouli, Leo Mehraban, Mike Stevenson, Nicholas Chandoke, Niklas Larsson, Rebecca Kelly, Samuel Tardieu, Stefan Schmiedl, @Bruno-366 , @bobisageek , @coltsingleactionarmyocelot , @inivekin , @knottio , @timor

Besides some bug fixes and library improvements, I want to highlight the following changes:

  • Moved the UI to render buttons and scrollbars rather than using images, which allows easier theming.
  • Fixed HiDPI scaling on Linux and Windows, although it currently doesn’t update the window settings when switching between screens with different scaling factors.
  • Update to Unicode 17.0.0.
  • Plugin support for the Neovim editor .

Some possible backwards compatibility issues:

  • The argument order to ltake was swapped to be more consistent with words like head .
  • The environment vocabulary on Windows now supports disambiguating f and "" (empty) values
  • The misc/atom folder was removed in favor of the factor/atom-language-factor repo.
  • The misc/Factor.tmbundle folder was removed in favor of the factor/factor.tmbundle repo.
  • The misc/vim folder was removed in favor of the factor/factor.vim repo.
  • The http vocabulary request tuple had a slot rename from post-data to data .
  • The furnace.asides vocabulary had a slot rename from post-data to data , and might require running ALTER TABLE asides RENAME COLUMN "post-data" TO data; .
  • The html.streams vocabulary was renamed to io.streams.html
  • The pdf.streams vocabulary was renamed to io.streams.pdf

What is Factor

Factor is a concatenative , stack-based programming language with high-level features including dynamic types, extensible syntax, macros, and garbage collection. On a practical side, Factor has a full-featured library , supports many different platforms, and has been extensively documented.

The implementation is fully compiled for performance, while still supporting interactive development . Factor applications are portable between all common platforms. Factor can deploy stand-alone applications on all platforms. Full source code for the Factor project is available under a BSD license.

New libraries:

Improved libraries:

Removed libraries:

  • ui.theme.images

VM Improvements:

  • More work on ARM64 backend (fix set-callstack, fix generic dispatch)

30 Year Anniversary of WarCraft II: Tides of Darkness

Hacker News
www.jorsys.org
2025-12-09 09:13:56
Comments...
Original Article

It has now been 30 years since WarCraft II: Tides of Darkness was released. After the great response to Warcraft: Orcs and Humans, released in November 1994, Blizzard began working on Warcraft II: Tides of Darkness. Development started in the first months of 1995, and the game was released in North America and Australia on December 9, 1995.

While WarCraft: Orcs and Humans had laid the foundations of the series — arguably even for the RTS genre as a whole — it was really WarCraft II that took things to new heights. More units could be selected at once, the player could right-click to issue commands, naval and aerial combat was introduced, and buildings and units could be upgraded. The graphics were more vivid and visually appealing, and the Fog of War was introduced, under which you could only see in the vicinity of your own units. Unlike in the first game, where you could indefinitely see any area you had previously visited, you now had to continuously scout the map.

WarCraft II: Tides of Darkness

Many things still resembled the first game. The two factions — the Humans and the Orcs — were balanced through their similarities. For every unit and building of one faction, the other had one that was functionally equivalent, and so the sides largely mirrored each other. The only real differences lay in the spells available to their higher-level units. In that regard, the clear winners were the Orcs, who had a tremendous advantage thanks to the incredibly powerful and unbalanced Bloodlust spell of the Ogre-Magi.

It is quite impressive that Blizzard managed to release a title of such quality in such a short span of time, especially considering that the overall design and gameplay evolved during development. Originally, Blizzard’s concept blended modern and fantasy elements, such as fighter pilots being ambushed by a fire-breathing dragon. In the Alpha version (it is probably still floating around somewhere on the Internet), which was given to magazines for review shortly before the game's release, players could, for example, mine rocks, which acted as an additional required resource.

Several versions and bundles of WarCraft II were released over the years:

  • WarCraft II: Tides of Darkness, originally written for DOS, though it had a Windows launch screen and ran well under Windows 95. A Macintosh version was also released. The DOS version supported multiplayer games via null modem cable, modem, or IPX, while Mac players could also play via TCP/IP or AppleTalk.
  • WarCraft II: Beyond the Dark Portal, the expansion, released in April 1996.
  • WarCraft: Battle Chest, released in 1996, was a bundle which included WarCraft: Orcs and Humans, WarCraft II: Tides of Darkness, and WarCraft II: Beyond the Dark Portal.
  • WarCraft II: The Dark Saga, released in 1997, was a port for the Sega Saturn and PlayStation consoles by Electronic Arts, including the campaigns from both Tides of Darkness and Beyond the Dark Portal.
  • WarCraft II: Battle.net Edition, released in 1999, ported the game's code to Microsoft Windows, fixed some minor bugs, and enabled multiplayer support via Blizzard's online service, Battle.net.
  • WarCraft II Battle Chest, released in 1999, included the Battle.net Edition and its official strategy guide.
  • WarCraft II: Remastered, released in November 2024, is a modern remaster of Tides of Darkness and Beyond the Dark Portal, with improved graphics and updated controls.

WarCraft II: Tides of Darkness received enthusiastic reviews, elevating Blizzard to the top ranks alongside Westwood Studios, id Software, and LucasArts. The rivalry between Blizzard's series and Westwood Studios' Command and Conquer series helped fuel the RTS boom of the late 1990s. PC Gamer US named WarCraft II the best game of 1995, calling it an "easy" choice and writing that "Warcraft II stand[s] out — way out — as the most impressive, most entertaining, game of 1995". The editors also awarded it Best Multi-Player Game of 1995.

WarCraft II was notable for the large number of third-party utilities created for it. Quickly, Daniel Lemberg reverse-engineered and published the map file (*.pud) format and wrote the first third-party map editor, War2xEd , which could do multiple things that the bundled map editor could not, such as editing unit attributes. Blizzard apparently began using War2xEd internally, and it influenced their decision to later ship a feature-rich map editor with StarCraft.

Next, Alexander Cech and Daniel Lemberg reverse-engineered the game data format, the WAR archives. Alexander Cech went on to make a hugely important tool called Wardraft , which allowed users to browse and modify the contents of the WAR archives. This enabled extensive game modifications, known as "Total Conversions". Many such projects gained popularity and remained in development for a long time, notable examples being DeathCraft: Twilight of Demons , War of the Ring , Editor's Total Conversion , Funcraft and Rituals of Rebirth .

Most of these utilities and conversions have long since faded into obscurity, but their legacy lives on. They influenced Blizzard's decision to bundle ever more powerful editors and trigger systems with StarCraft and later WarCraft III, which in turn spawned entire games such as Dota (which began as a WarCraft III map). Hopefully, someday (soon?) we can host some of the Total Conversions here at Jorvik Systems .


As a personal anecdote, I vividly remember two defining moments related to the game. I was young when it came out, and my dad's friend had pirated it; somehow the game ended up on our computer. I was too young to speak English at the time, and the interface was confusing to me, so a relative helped me understand the basics — how to make peons construct buildings, how to control units, and how to navigate around the map. I hadn't played computer games much before then, but from that moment on, I was arguably obsessed.

A second strong memory came a few months later, at my friend Erik 's house, on his Intel 486 PC. He was experimenting with the WarCraft II map editor, which I hadn't known existed, and I was blown away. I simply could not believe that Blizzard would ship such a tool with the game; to me, it meant that people could essentially create their own games by designing entirely new scenarios. It is quite possible that my fascination with modding was born in that very moment. We probably went outside to play shortly afterward, which I found incredibly lame — we had at our disposal the most powerful tool I could imagine, so why were we not inside using it?

39C3 Fahrplan 2025

Lobsters
fahrplan.events.ccc.de
2025-12-09 09:01:28
Comments...
Original Article

Schedule grid (half-hour slots from 10:00 onward each day): Sat - Day 1 - December 27; Sun - Day 2 - December 28; Mon - Day 3 - December 29; Tue - Day 4 - December 30.

EU opens investigation into Google’s use of online content for AI models

Guardian
www.theguardian.com
2025-12-09 08:48:06
European Commission to assess whether Gemini owner is putting rival companies at a disadvantageBusiness live – latest updatesThe EU has opened an investigation to assess whether Google is breaching European competition rules in its use of online content from web publishers and YouTube for artificial...
Original Article

The EU has opened an investigation to assess whether Google is breaching European competition rules in its use of online content from web publishers and YouTube for artificial intelligence.

The European Commission said on Tuesday it will examine whether the US tech company, which runs the Gemini AI model and is owned by Alphabet, is putting rival AI owners at a “disadvantage”.

“The investigation will notably examine whether Google is distorting competition by imposing unfair terms and conditions on publishers and content creators, or by granting itself privileged access to such content, thereby placing developers of rival AI models at a disadvantage,” the commission said.

It said it was concerned that Google may have used content from web publishers to generate AI-powered services on its search results pages without appropriate compensation to publishers and without offering them the possibility to refuse such use of their content.

The commission said it was also concerned as to whether Google has used content uploaded to YouTube to train its own generative AI models without offering creators compensation or the possibility to refuse.

“Content creators uploading videos on YouTube have an obligation to grant Google permission to use their data for different purposes, including for training generative AI models,” the commission said.

Google does not pay YouTube content creators for their content, nor does it allow them to upload their content on YouTube without allowing Google to use such data, it said. The commission noted that rival developers of AI models are barred by YouTube policies from using YouTube content to train their own AI models.

Last month, the head of Google’s parent company said people should not “blindly trust” everything AI tools tell them.

Sundar Pichai, the chief executive of Alphabet, said AI models were “prone to errors” and urged people to use them alongside other tools.


In the same interview, Pichai warned that no company would be immune if the AI bubble burst.

Reuters contributed to this report

xkcd 1313: Regex Golf (2014)

Lobsters
nbviewer.org
2025-12-09 08:41:52
Comments...
Original Article
def tests():
    assert subparts('^it$') == {'^', 'i', 't', '$', '^i', 'it', 't$', '^it', 'it$', '^it$'}
    assert subparts('this') == {'t', 'h', 'i', 's', 'th', 'hi', 'is', 'thi', 'his', 'this'}
    assert subparts('banana') == {'a', 'an', 'ana', 'anan', 'b', 'ba', 'ban', 'bana', 
                                  'n', 'na', 'nan', 'nana'}
    
    assert dotify('it') == {'it', 'i.', '.t', '..'}
    assert dotify('^it$') == {'^it$', '^i.$', '^.t$', '^..$'}
    assert dotify('this') == {'this', 'thi.', 'th.s', 'th..', 't.is', 't.i.', 't..s', 't...',
                              '.his', '.hi.', '.h.s', '.h..', '..is', '..i.', '...s', '....'}
    assert regex_parts({'win'}, {'losers', 'bin', 'won'}) == {
        '^win$', '^win', '^wi.', 'wi.',  'wi', '^wi', 'win$', 'win', 'wi.$'}
    assert regex_parts({'win'}, {'bin', 'won', 'wine', 'wit'}) == {'^win$', 'win$'}
    assert regex_parts({'boy', 'coy'}, 
                       {'ahoy', 'toy', 'book', 'cook', 'boycott', 'cowboy', 'cod', 'buy', 'oy', 
                        'foil', 'coyote'}) == {'^boy$', '^coy$', 'c.y$', 'coy$'}
    
    assert matches('a|b|c', {'a', 'b', 'c', 'd', 'e'}) == {'a', 'b', 'c'}
    assert matches('a|b|c', {'any', 'bee', 'succeed', 'dee', 'eee!'}) == {
        'any', 'bee', 'succeed'}
    
    assert OR(['a', 'b', 'c']) == 'a|b|c'
    assert OR(['a']) == 'a'
    
    assert words('this is a test this is') == {'this', 'is', 'a', 'test'}
    
    assert findregex({"ahahah", "ciao"},  {"ahaha", "bye"}) == 'a.$'
    assert findregex({"this", "that", "the other"}, {"one", "two", "here", "there"}) == 'h..$'
    assert findregex({'boy', 'coy', 'toy', 'joy'}, {'ahoy', 'buy', 'oy', 'foil'}) == '^.oy'
    
    assert not mistakes('a|b|c', {'ahoy', 'boy', 'coy'}, {'joy', 'toy'})
    assert not mistakes('^a|^b|^c', {'ahoy', 'boy', 'coy'}, {'joy', 'toy', 'kickback'})
    assert mistakes('^.oy', {'ahoy', 'boy', 'coy'}, {'joy', 'ploy'}) == {
        "Should have matched: ahoy", 
        "Should not have matched: joy"}
    return 'tests pass'

tests()
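
Note that tests() exercises helper functions (subparts, dotify, regex_parts, matches, OR, words, findregex, mistakes) that are defined earlier in the full notebook and are not reproduced in this excerpt. As a rough guide only, here is a minimal sketch of the three simplest helpers, written to be consistent with the assertions above; this is an assumption about their shape, and the notebook's actual definitions may differ in detail.

import re

def words(text):
    # Sketch (assumed): split whitespace-separated text into a set of unique words.
    return set(text.split())

def OR(regexes):
    # Sketch (assumed): join candidate regex components into a single alternation.
    return '|'.join(regexes)

def matches(regex, strings):
    # Sketch (assumed): return the subset of strings that the regex matches anywhere.
    return {s for s in strings if re.search(regex, s)}

The remaining helpers build on these; roughly speaking, findregex greedily assembles an alternation of candidate parts that matches every winner while matching no loser.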

Trump clears way for Nvidia to sell powerful AI chips to China

Guardian
www.theguardian.com
2025-12-09 08:29:08
Commerce department finalizing deal to allow H200 chips to be sold to China as strict Biden-era restrictions relaxed Donald Trump has cleared the way for Nvidia to begin selling its powerful AI computer chips to China, marking a win for the chip maker and its CEO Jensen Huang, who has spent months l...
Original Article

Donald Trump has cleared the way for Nvidia to begin selling its powerful AI computer chips to China, marking a win for the chip maker and its CEO Jensen Huang, who has spent months lobbying the White House to open up sales in the country.

Before Monday’s announcement, the US had prohibited sales of Nvidia’s most advanced chips to China over national security concerns.

“I have informed President Xi, of China, that the United States will allow NVIDIA to ship its H200 products to approved customers in China, and other Countries, under conditions that allow for continued strong National Security,” Trump posted to Truth Social on Monday. “President Xi responded positively!”

Trump said the Department of Commerce is finalising the details and that he was planning to make the same offer to other chip companies, including Advanced Micro Devices (AMD) and Intel. Nvidia’s H200 chips are the company’s second-most powerful, and far more advanced than the H20, a lower-powered model originally designed for the Chinese market so as not to breach restrictions, but which the US banned anyway in April.

The president said the US would receive 25% of the proceeds, more than the 15% previously agreed to with Nvidia in an earlier deal to lift restrictions, and following similar unorthodox plans for the federal government to take a financial cut from private business dealings. In August, Trump said the US would receive a 10% stake in the tech company Intel . Some lawmakers have questioned the legality of such arrangements.

According to the Hill, Democratic senators Elizabeth Warren of Massachusetts and Andy Kim of New Jersey sent a letter to commerce secretary Howard Lutnick last week, outlining their concerns with selling these chips to China and saying it risked powering the country’s “surveillance, censorship, and military applications”.

“I urge you to stop ignoring the input of bipartisan members of Congress and your own experts in order to cut deals that trade away America’s national security,” the senators wrote.

On social media, Warren called for Huang to appear before Congress to testify under oath.

Huang has worked closely with Trump since the inauguration and has made several trips to the White House. The CEO attended the president’s AI summit in July, met with Trump as recently as last week and was even a guest at the White House dinner for the Saudi crown prince Mohammed bin Salman. Huang has also pledged to invest $500bn in AI infrastructure in the US over the next four years.

Huang has also visited China several times, meeting with officials and Chinese tech executives, as US bans were variously lifted and reintroduced. Earlier this year, China imposed its own controls on the imports of Nvidia chips, with top tech firms reportedly instructed to cancel orders, citing national security concerns and confidence in China’s domestic chip development.

In October Huang said Nvidia has gone from having 95% of the Chinese market to having 0%, and called the bans a “strategic mistake”.

Now, selling chips to China – the world’s second-largest economy – could mean a windfall worth billions of dollars for Nvidia, which is already valued at $4.5tn.

“We applaud President Trump’s decision,” said an Nvidia spokesperson, who added that offering the H200 chips “to approved commercial customers, vetted by the Department of Commerce, strikes a thoughtful balance that is great for America”.

The Nvidia spokesperson and Trump said the move would support US jobs and manufacturing. In his Truth Social post, Trump condemned the Biden administration’s policies, which imposed strict export controls on powerful chips. The Biden administration had said withholding such technology from China bolstered US competition, protected national security and hampered AI development in China.

“That Era is OVER!” Trump wrote. “My Administration will always put America FIRST.”

On Tuesday afternoon China’s foreign ministry said it had noted the reports.

“China has always adhered to the principle that China and the United States can achieve mutual benefit and win-win results through cooperation,” the spokesperson said.

Ma Jihua, a telecom industry analyst, told the state media outlet the Global Times that years of US curbs on AI exports had “provided a rare chance for China’s domestic chip industry to grow and catch up”.